Minimax Bounds on Stochastic Batched Convex Optimization

John Duchi, Feng Ruan, Chulhee Yun
Proceedings of the 31st Conference On Learning Theory, PMLR 75:3065-3162, 2018.

Abstract

We study the stochastic batched convex optimization problem, in which we use many \emph{parallel} observations to optimize a convex function given limited rounds of interaction. In each of $M$ rounds, an algorithm may query for information at $n$ points, and after issuing all $n$ queries, it receives unbiased noisy function and/or (sub)gradient evaluations at the $n$ points. After $M$ such rounds, the algorithm must output an estimator. We provide lower and upper bounds on the performance of such batched convex optimization algorithms in zeroth and first-order settings for Lipschitz convex and smooth strongly convex functions. Our rates of convergence (nearly) achieve the fully sequential rate once $M = O(d \log \log n)$, where $d$ is the problem dimension, but the rates may exponentially degrade as the dimension $d$ increases, in distinction from fully sequential settings.
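
To make the interaction model concrete, the sketch below simulates a batched first-order procedure in Python. It is a minimal illustration of the oracle protocol described above, nothing more: the minibatch-averaging update, the grad_oracle interface, and all parameter choices are our own assumptions for illustration, not the algorithm or rates analyzed in the paper.

import numpy as np

# Minimal simulation of the batched oracle model from the abstract:
# in each of M rounds the algorithm issues n queries, and only after
# all n queries are issued does it receive unbiased noisy gradients
# at those points. One update is made per round.
def batched_sgd(grad_oracle, x0, M, n, step):
    x = np.asarray(x0, dtype=float)
    for _ in range(M):
        # Issue all n queries for this round at the current iterate
        # (a real method could place them at n distinct points).
        grads = [grad_oracle(x) for _ in range(n)]
        # Update only after the whole batch of answers arrives.
        x = x - step * np.mean(grads, axis=0)
    return x

# Example: unbiased noisy gradients of f(x) = 0.5 * ||x||^2,
# a smooth strongly convex function minimized at the origin.
rng = np.random.default_rng(0)
oracle = lambda x: x + rng.normal(scale=0.1, size=x.shape)
print(batched_sgd(oracle, x0=np.ones(5), M=10, n=100, step=0.5))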

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-duchi18a,
  title     = {Minimax Bounds on Stochastic Batched Convex Optimization},
  author    = {Duchi, John and Ruan, Feng and Yun, Chulhee},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {3065--3162},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/duchi18a/duchi18a.pdf},
  url       = {https://proceedings.mlr.press/v75/duchi18a.html}
}
APA
Duchi, J., Ruan, F. & Yun, C. (2018). Minimax Bounds on Stochastic Batched Convex Optimization. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:3065-3162. Available from https://proceedings.mlr.press/v75/duchi18a.html.