Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization


Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona;
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7664-7672, 2019.

Abstract

Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function into an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that are automatically adaptive to the level of noise.
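The abstract states the idea only at a high level. Below is a minimal sketch of how such a scheme can look, assuming a quadratic-in-stepsize surrogate derived from the standard M-smoothness bound f(x - eta*g) <= f(x) - eta*<grad f(x), g> + (M*eta^2/2)*||g||^2, with a second independent stochastic gradient keeping the linear term unbiased, and a Follow-The-Regularized-Leader (FTRL) learner on the surrogates. The function name sgdol_sketch, the regularization weight alpha, and the clipping range [0, 2/M] are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def sgdol_sketch(stoch_grad, x0, M, T, alpha=10.0, seed=0):
    """Sketch of SGD with stepsizes learned online via surrogate losses.

    stoch_grad(x, rng) -> one unbiased stochastic gradient of f at x.
    M is the smoothness constant of f. Names and defaults are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    sum_corr = 0.0  # running sum of <g_t, g'_t>; each term estimates ||grad f(x_t)||^2
    sum_sq = 0.0    # running sum of ||g_t||^2
    eta = 0.0       # FTRL over an empty history with a quadratic regularizer gives 0
    for _ in range(T):
        # Two independent stochastic gradients at the same point: the second one
        # keeps the linear term of the surrogate loss unbiased.
        g = stoch_grad(x, rng)
        g_prime = stoch_grad(x, rng)
        x = x - eta * g  # SGD step with the current self-tuned stepsize
        # Surrogate l_t(eta) = -eta*<g, g'> + (M/2)*eta^2*||g||^2 is convex in eta;
        # FTRL with quadratic regularization minimizes the running sum in closed form.
        sum_corr += float(g @ g_prime)
        sum_sq += float(g @ g)
        eta = float(np.clip(sum_corr / (M * (alpha + sum_sq)), 0.0, 2.0 / M))
    return x
```

Because each surrogate is a quadratic in the scalar stepsize, the online learning problem is one-dimensional and convex, so any no-regret algorithm applies; the closed-form FTRL update above is just one convenient choice.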
