Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization

Xinyi Chen, Elad Hazan
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5317-5324, 2024.

Abstract

We describe an open problem: reduce offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to make progress on explaining the success of adaptive gradient methods for deep learning. A prize of 500 dollars is offered to the winner.
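The conjectured reduction would extend the classical online-to-batch conversion, which in the convex case turns any low-regret online learner into a stochastic optimizer whose average iterate has suboptimality bounded by regret/T. A minimal sketch of that known convex conversion, for context only (the toy quadratic objective and all function names below are illustrative assumptions, not from the paper):

```python
import random

def stochastic_gradient(x):
    # Noisy gradient of the toy convex loss f(x) = x**2, minimized at x = 0.
    return 2.0 * x + random.gauss(0.0, 0.1)

def online_to_batch(T=5000, eta=0.01, x0=1.0):
    # Online gradient descent: each round plays x_t, suffers a stochastic
    # convex loss, and updates with that loss's gradient.
    x, running_sum = x0, 0.0
    for _ in range(T):
        running_sum += x
        x -= eta * stochastic_gradient(x)
    # The average iterate inherits the regret bound: its expected
    # suboptimality is at most (average regret of the online learner).
    return running_sum / T

random.seed(0)
x_bar = online_to_batch()
print(abs(x_bar))  # small: the average iterate is near the minimizer 0
```

The open problem asks whether an analogous black-box conversion exists when the offline objective is nonconvex, with convergence measured to an approximate stationary point rather than a global minimum.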

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-chen24e,
  title     = {Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization},
  author    = {Chen, Xinyi and Hazan, Elad},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {5317--5324},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/chen24e/chen24e.pdf},
  url       = {https://proceedings.mlr.press/v247/chen24e.html},
  abstract  = {We describe an open problem: reduce offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to make progress on explaining the success of adaptive gradient methods for deep learning. A prize of 500 dollars is offered to the winner.}
}
Endnote
%0 Conference Paper
%T Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization
%A Xinyi Chen
%A Elad Hazan
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-chen24e
%I PMLR
%P 5317--5324
%U https://proceedings.mlr.press/v247/chen24e.html
%V 247
%X We describe an open problem: reduce offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to make progress on explaining the success of adaptive gradient methods for deep learning. A prize of 500 dollars is offered to the winner.
APA
Chen, X. & Hazan, E. (2024). Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:5317-5324. Available from https://proceedings.mlr.press/v247/chen24e.html.