Open Problem: Black-Box Reductions and Adaptive Gradient Methods for Nonconvex Optimization
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:5317-5324, 2024.
Abstract
We describe an open problem: reduce offline nonconvex stochastic optimization to regret minimization in online convex optimization. The conjectured reduction aims to make progress on explaining the success of adaptive gradient methods for deep learning. A prize of 500 dollars is offered to the winner.