Lower Bounds for Higher-Order Convex Optimization

Naman Agarwal, Elad Hazan
Proceedings of the 31st Conference On Learning Theory, PMLR 75:774-792, 2018.

Abstract

State-of-the-art methods in mathematical optimization employ higher-order derivative information. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. This refutes the hope that higher-order smoothness and higher-order derivatives can lead to dimension free polynomial time algorithms for convex optimization. As a special case, we show Nesterov’s accelerated cubic regularization method and higher-order methods to be nearly tight.
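For context on the "nearly tight" claim, the following sketch restates the well-known guarantee of Nesterov's accelerated cubic regularization for convex objectives with a Lipschitz-continuous Hessian; it is background from Nesterov's 2008 analysis, not a result quoted from this page, and the notation (L_2 for the Hessian's Lipschitz constant, x_0 for the starting point, c for an absolute constant) is ours. The paper's contribution is a lower bound showing that a polynomial dependence on 1/ε and the smoothness parameter of this kind cannot be avoided, so upper bounds of the form below are nearly the best possible.

\[
  f(x_k) - f(x^\star) \;\le\; \frac{c\, L_2\, \|x_0 - x^\star\|^3}{k^3}
  \qquad\Longrightarrow\qquad
  k \;=\; O\!\Big( \big( L_2 \|x_0 - x^\star\|^3 / \varepsilon \big)^{1/3} \Big)
  \ \text{iterations suffice for an } \varepsilon\text{-approximate minimizer.}
\]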

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-agarwal18a,
  title     = {Lower Bounds for Higher-Order Convex Optimization},
  author    = {Agarwal, Naman and Hazan, Elad},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {774--792},
  year      = {2018},
  editor    = {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/agarwal18a/agarwal18a.pdf},
  url       = {https://proceedings.mlr.press/v75/agarwal18a.html},
  abstract  = {State-of-the-art methods in mathematical optimization employ higher-order derivative information. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. This refutes the hope that higher-order smoothness and higher-order derivatives can lead to dimension free polynomial time algorithms for convex optimization. As a special case, we show Nesterov’s accelerated cubic regularization method and higher-order methods to be nearly tight.}
}
Endnote
%0 Conference Paper
%T Lower Bounds for Higher-Order Convex Optimization
%A Naman Agarwal
%A Elad Hazan
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-agarwal18a
%I PMLR
%P 774--792
%U https://proceedings.mlr.press/v75/agarwal18a.html
%V 75
%X State-of-the-art methods in mathematical optimization employ higher-order derivative information. We explore the limitations of higher-order optimization and prove that even for convex optimization, a polynomial dependence on the approximation guarantee and higher-order smoothness parameters is necessary. This refutes the hope that higher-order smoothness and higher-order derivatives can lead to dimension free polynomial time algorithms for convex optimization. As a special case, we show Nesterov’s accelerated cubic regularization method and higher-order methods to be nearly tight.
APA
Agarwal, N. & Hazan, E. (2018). Lower Bounds for Higher-Order Convex Optimization. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:774-792. Available from https://proceedings.mlr.press/v75/agarwal18a.html.