Lower Bounds for Parallel and Randomized Convex Optimization

Jelena Diakonikolas, Cristóbal Guzmán
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1132-1157, 2019.

Abstract

We study the question of whether parallelization in the exploration of the feasible set can be used to speed up convex optimization, in the local oracle model of computation. We show that the answer is negative for both deterministic and randomized algorithms applied to essentially any of the interesting geometries and nonsmooth, weakly-smooth, or smooth objective functions. In particular, we show that it is not possible to obtain a polylogarithmic (in the sequential complexity of the problem) number of parallel rounds with a polynomial (in the dimension) number of queries per round. In the majority of these settings and when the dimension of the space is polynomial in the inverse target accuracy, our lower bounds match the oracle complexity of sequential convex optimization, up to at most a logarithmic factor in the dimension, which makes them (nearly) tight. Prior to our work, lower bounds for parallel convex optimization algorithms were only known in a small fraction of the settings considered in this paper, mainly applying to Euclidean ($\ell_2$) and $\ell_\infty$ spaces. Our work provides a more general and streamlined approach for proving lower bounds in the setting of parallel convex optimization.
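In schematic form (a paraphrase of the claim above, not the paper's precise theorem statement): if an algorithm proceeds in $K$ parallel rounds, each issuing at most $\mathrm{poly}(d)$ local-oracle queries, then it cannot reach target accuracy $\epsilon$ with $K = O\bigl(\mathrm{polylog}(Q(\epsilon))\bigr)$, where $Q(\epsilon)$ denotes the sequential oracle complexity of the problem class and $d$ the dimension of the space.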

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-diakonikolas19c,
  title     = {Lower Bounds for Parallel and Randomized Convex Optimization},
  author    = {Diakonikolas, Jelena and Guzm\'{a}n, Crist\'{o}bal},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {1132--1157},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/diakonikolas19c/diakonikolas19c.pdf},
  url       = {https://proceedings.mlr.press/v99/diakonikolas19c.html}
}
Endnote
%0 Conference Paper
%T Lower Bounds for Parallel and Randomized Convex Optimization
%A Jelena Diakonikolas
%A Cristóbal Guzmán
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-diakonikolas19c
%I PMLR
%P 1132--1157
%U https://proceedings.mlr.press/v99/diakonikolas19c.html
%V 99
APA
Diakonikolas, J. & Guzmán, C. (2019). Lower Bounds for Parallel and Randomized Convex Optimization. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:1132-1157. Available from https://proceedings.mlr.press/v99/diakonikolas19c.html.
