Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization

Zhenxun Zhuang, Ashok Cutkosky, Francesco Orabona
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:7664-7672, 2019.

Abstract

Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function onto an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that are automatically adaptive to the level of noise.
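The abstract describes the method only at a high level. The following is a minimal illustrative Python sketch of the general idea, not the authors' implementation: it assumes an M-smooth objective, a surrogate loss that is quadratic in the scalar stepsize (as suggested by smoothness), two independent stochastic gradient samples per step (one for the update, one for the surrogate), and projected online gradient descent as the no-regret learner. All names and constants (grad_oracle, eta_max, ol_lr) are hypothetical.

# Hedged sketch: SGD whose scalar stepsize eta is tuned by an online learner.
# The surrogate form and all names below are illustrative assumptions,
# not the paper's actual code.
import numpy as np

def sgd_with_online_stepsize(grad_oracle, x0, M=1.0, T=1000, ol_lr=0.01, eta_max=1.0):
    """grad_oracle(x) returns one stochastic gradient of the objective at x."""
    x = x0.copy()
    eta = eta_max / 2.0            # initial stepsize guess
    for t in range(T):
        g = grad_oracle(x)         # independent sample used in the surrogate
        g_prime = grad_oracle(x)   # independent sample used for the SGD step
        # SGD step with the current online-learned stepsize
        x = x - eta * g_prime
        # Convex surrogate in eta suggested by M-smoothness:
        #   ell_t(eta) = -eta * <g, g'> + (M/2) * eta^2 * ||g'||^2
        # Its derivative with respect to eta:
        surrogate_grad = -np.dot(g, g_prime) + M * eta * np.dot(g_prime, g_prime)
        # One step of projected online gradient descent on eta (the no-regret learner)
        eta = min(max(eta - ol_lr * surrogate_grad, 0.0), eta_max)
    return x

Because each surrogate is convex in the scalar stepsize, any no-regret online convex optimization algorithm could play the role of the inner learner; projected online gradient descent is used above only to keep the sketch short. For instance, grad_oracle could return 2*x + np.random.randn(*x.shape), i.e., a noisy gradient of the squared norm.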

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-zhuang19a,
  title = {Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization},
  author = {Zhuang, Zhenxun and Cutkosky, Ashok and Orabona, Francesco},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages = {7664--7672},
  year = {2019},
  editor = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume = {97},
  series = {Proceedings of Machine Learning Research},
  month = {09--15 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v97/zhuang19a/zhuang19a.pdf},
  url = {https://proceedings.mlr.press/v97/zhuang19a.html},
  abstract = {Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function onto an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that are automatically adaptive to the level of noise.}
}
Endnote
%0 Conference Paper
%T Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization
%A Zhenxun Zhuang
%A Ashok Cutkosky
%A Francesco Orabona
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-zhuang19a
%I PMLR
%P 7664--7672
%U https://proceedings.mlr.press/v97/zhuang19a.html
%V 97
%X Stochastic Gradient Descent (SGD) has played a central role in machine learning. However, it requires a carefully hand-picked stepsize for fast convergence, which is notoriously tedious and time-consuming to tune. Over the last several years, a plethora of adaptive gradient-based algorithms have emerged to ameliorate this problem. In this paper, we propose new surrogate losses to cast the problem of learning the optimal stepsizes for the stochastic optimization of a non-convex smooth objective function onto an online convex optimization problem. This allows the use of no-regret online algorithms to compute optimal stepsizes on the fly. In turn, this results in an SGD algorithm with self-tuned stepsizes that guarantees convergence rates that are automatically adaptive to the level of noise.
APA
Zhuang, Z., Cutkosky, A., & Orabona, F. (2019). Surrogate Losses for Online Learning of Stepsizes in Stochastic Non-Convex Optimization. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:7664-7672. Available from https://proceedings.mlr.press/v97/zhuang19a.html.
