Follow the Moving Leader in Deep Learning

Shuai Zheng, James T. Kwok
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:4110-4119, 2017.

Abstract

Deep networks are highly nonlinear and difficult to optimize. During training, the parameter iterate may move from one local basin to another, or the data distribution may even change. Inspired by the close connection between stochastic optimization and online learning, we propose a variant of the follow the regularized leader (FTRL) algorithm called follow the moving leader (FTML). Unlike the FTRL family of algorithms, the recent samples are weighted more heavily in each iteration and so FTML can adapt more quickly to changes. We show that FTML enjoys the nice properties of RMSprop and Adam, while avoiding their pitfalls. Experimental results on a number of deep learning models and tasks demonstrate that FTML converges quickly, and outperforms other state-of-the-art optimizers.
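The abstract's central distinction is that FTML weights recent samples more heavily in each iteration, whereas the FTRL family treats all past samples uniformly. A minimal sketch of that contrast, assuming a geometric decay and a simple normalization (illustrative choices, not the paper's exact weighting scheme):

```python
import numpy as np

def uniform_weights(t):
    # FTRL-style: every past sample contributes equally at step t
    return np.ones(t) / t

def decayed_weights(t, beta=0.9):
    # FTML-style recency weighting: a sample's weight decays
    # geometrically with its age, so recent samples dominate
    # (beta and the normalization are illustrative assumptions)
    w = beta ** np.arange(t - 1, -1, -1)  # oldest ... newest
    return w / w.sum()

u = uniform_weights(5)
d = decayed_weights(5)
# the newest sample gets more weight under decay than under uniform averaging
assert d[-1] > u[-1]
# the oldest sample gets less
assert d[0] < u[0]
```

Under such a scheme the effective objective tracks a "moving leader": if the data distribution shifts or the iterate enters a new basin, older gradients are discounted and the minimizer adapts more quickly than under uniform FTRL averaging.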

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-zheng17a,
  title     = {Follow the Moving Leader in Deep Learning},
  author    = {Shuai Zheng and James T. Kwok},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {4110--4119},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/zheng17a/zheng17a.pdf},
  url       = {http://proceedings.mlr.press/v70/zheng17a.html},
  abstract  = {Deep networks are highly nonlinear and difficult to optimize. During training, the parameter iterate may move from one local basin to another, or the data distribution may even change. Inspired by the close connection between stochastic optimization and online learning, we propose a variant of the follow the regularized leader (FTRL) algorithm called follow the moving leader (FTML). Unlike the FTRL family of algorithms, the recent samples are weighted more heavily in each iteration and so FTML can adapt more quickly to changes. We show that FTML enjoys the nice properties of RMSprop and Adam, while avoiding their pitfalls. Experimental results on a number of deep learning models and tasks demonstrate that FTML converges quickly, and outperforms other state-of-the-art optimizers.}
}
Endnote
%0 Conference Paper
%T Follow the Moving Leader in Deep Learning
%A Shuai Zheng
%A James T. Kwok
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-zheng17a
%I PMLR
%P 4110--4119
%U http://proceedings.mlr.press/v70/zheng17a.html
%V 70
%X Deep networks are highly nonlinear and difficult to optimize. During training, the parameter iterate may move from one local basin to another, or the data distribution may even change. Inspired by the close connection between stochastic optimization and online learning, we propose a variant of the follow the regularized leader (FTRL) algorithm called follow the moving leader (FTML). Unlike the FTRL family of algorithms, the recent samples are weighted more heavily in each iteration and so FTML can adapt more quickly to changes. We show that FTML enjoys the nice properties of RMSprop and Adam, while avoiding their pitfalls. Experimental results on a number of deep learning models and tasks demonstrate that FTML converges quickly, and outperforms other state-of-the-art optimizers.
APA
Zheng, S. & Kwok, J. T. (2017). Follow the Moving Leader in Deep Learning. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:4110-4119. Available from http://proceedings.mlr.press/v70/zheng17a.html.