The Price of Adaptivity in Stochastic Convex Optimization

Yair Carmon, Oliver Hinder
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:772-774, 2024.

Abstract

We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a “price of adaptivity” (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch.
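To make the "multiplicative increase in suboptimality" concrete, the LaTeX sketch below renders one natural formalization of the PoA. The notation is our own shorthand, not the paper's exact definition: T is the number of stochastic gradient queries, D the initial distance to the optimum, G the gradient norm bound, and DG/sqrt(T) the classical minimax suboptimality rate when both parameters are known.

% Schematic price-of-adaptivity definition (our notation, not verbatim
% from the paper). R(A; D, G, T) denotes the worst-case expected
% suboptimality of algorithm A over problems with distance-to-optimum D
% and stochastic gradient norms bounded by G.
\[
  \mathrm{PoA}(\mathcal{A})
    \;=\; \sup_{(D,\,G) \in \mathcal{U}}
      \frac{R(\mathcal{A};\, D, G, T)}{DG/\sqrt{T}},
\]
% where \mathcal{U} is the set of parameter values consistent with the
% algorithm's prior knowledge. In this notation the abstract's claims read:
% if G is known but D is only known up to a multiplicative factor rho, then
% PoA = Omega(log rho) for expected suboptimality (and Omega(log log rho)
% for median suboptimality); if both D and G are uncertain, the PoA grows
% polynomially in the level of uncertainty.

Normalizing by the known-parameter rate makes the PoA scale-free: a PoA of 1 would mean adaptivity is free, and any growth with the uncertainty level quantifies its price.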

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-carmon24a,
  title     = {The Price of Adaptivity in Stochastic Convex Optimization},
  author    = {Carmon, Yair and Hinder, Oliver},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {772--774},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/carmon24a/carmon24a.pdf},
  url       = {https://proceedings.mlr.press/v247/carmon24a.html},
  abstract  = {We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a “price of adaptivity” (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch.}
}
Endnote
%0 Conference Paper
%T The Price of Adaptivity in Stochastic Convex Optimization
%A Yair Carmon
%A Oliver Hinder
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-carmon24a
%I PMLR
%P 772--774
%U https://proceedings.mlr.press/v247/carmon24a.html
%V 247
%X We prove impossibility results for adaptivity in non-smooth stochastic convex optimization. Given a set of problem parameters we wish to adapt to, we define a “price of adaptivity” (PoA) that, roughly speaking, measures the multiplicative increase in suboptimality due to uncertainty in these parameters. When the initial distance to the optimum is unknown but a gradient norm bound is known, we show that the PoA is at least logarithmic for expected suboptimality, and double-logarithmic for median suboptimality. When there is uncertainty in both distance and gradient norm, we show that the PoA must be polynomial in the level of uncertainty. Our lower bounds nearly match existing upper bounds, and establish that there is no parameter-free lunch.
APA
Carmon, Y. & Hinder, O. (2024). The Price of Adaptivity in Stochastic Convex Optimization. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:772-774. Available from https://proceedings.mlr.press/v247/carmon24a.html.
