Learning Gradient Descent: Better Generalization and Longer Horizons

Kaifeng Lv, Shunhua Jiang, Jian Li
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:2247-2255, 2017.

Abstract

Training deep neural networks is a highly nontrivial task, involving carefully selecting appropriate training algorithms, scheduling step sizes and tuning other hyperparameters. Trying different combinations can be quite labor-intensive and time-consuming. Recently, researchers have tried to use deep learning algorithms to exploit the landscape of the loss function of the training problem of interest, and learn how to optimize over it in an automatic way. In this paper, we propose a new learning-to-learn model and some useful and practical tricks. Our optimizer outperforms generic, hand-crafted optimization algorithms and state-of-the-art learning-to-learn optimizers by DeepMind in many tasks. We demonstrate the effectiveness of our algorithms on a number of tasks, including deep MLPs, CNNs, and simple LSTMs.
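To give a flavor of the learning-to-learn setup the abstract refers to, below is a minimal, hypothetical PyTorch sketch (not the authors' architecture): a coordinate-wise LSTM proposes per-parameter updates from gradients, and its own weights are meta-trained by backpropagating the optimizee's loss through the unrolled optimization trajectory. The names (LSTMOptimizer, unroll) and the quadratic toy optimizee are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise LSTM that maps each parameter's gradient to an update.
    Hypothetical sketch; the paper's model adds further inputs and training tricks."""
    def __init__(self, hidden=20):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.LSTMCell(1, hidden)   # input: one gradient value per coordinate
        self.head = nn.Linear(hidden, 1)     # output: one update value per coordinate

    def forward(self, grad, state):
        h, c = self.cell(grad, state)
        return self.head(h), (h, c)

def unroll(opt_net, steps=20, dim=10):
    """Optimize a random quadratic with the learned optimizer and return the
    summed loss along the trajectory, which serves as the meta-objective."""
    A = torch.randn(dim, dim)
    theta = torch.randn(dim, requires_grad=True)
    state = (torch.zeros(dim, opt_net.hidden), torch.zeros(dim, opt_net.hidden))
    meta_loss = 0.0
    for _ in range(steps):
        loss = ((A @ theta) ** 2).mean()
        grad, = torch.autograd.grad(loss, theta, create_graph=True)
        update, state = opt_net(grad.unsqueeze(1), state)
        theta = theta + update.squeeze(1)    # keep the graph so the meta-gradient flows
        meta_loss = meta_loss + loss
    return meta_loss

# Meta-training: ordinary Adam updates the learned optimizer's own weights.
opt_net = LSTMOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)
for epoch in range(5):
    meta_opt.zero_grad()
    unroll(opt_net).backward()
    meta_opt.step()
```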

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-lv17a,
  title     = {Learning Gradient Descent: Better Generalization and Longer Horizons},
  author    = {Kaifeng Lv and Shunhua Jiang and Jian Li},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {2247--2255},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/lv17a/lv17a.pdf},
  url       = {https://proceedings.mlr.press/v70/lv17a.html},
  abstract  = {Training deep neural networks is a highly nontrivial task, involving carefully selecting appropriate training algorithms, scheduling step sizes and tuning other hyperparameters. Trying different combinations can be quite labor-intensive and time-consuming. Recently, researchers have tried to use deep learning algorithms to exploit the landscape of the loss function of the training problem of interest, and learn how to optimize over it in an automatic way. In this paper, we propose a new learning-to-learn model and some useful and practical tricks. Our optimizer outperforms generic, hand-crafted optimization algorithms and state-of-the-art learning-to-learn optimizers by DeepMind in many tasks. We demonstrate the effectiveness of our algorithms on a number of tasks, including deep MLPs, CNNs, and simple LSTMs.}
}
Endnote
%0 Conference Paper
%T Learning Gradient Descent: Better Generalization and Longer Horizons
%A Kaifeng Lv
%A Shunhua Jiang
%A Jian Li
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-lv17a
%I PMLR
%P 2247--2255
%U https://proceedings.mlr.press/v70/lv17a.html
%V 70
%X Training deep neural networks is a highly nontrivial task, involving carefully selecting appropriate training algorithms, scheduling step sizes and tuning other hyperparameters. Trying different combinations can be quite labor-intensive and time-consuming. Recently, researchers have tried to use deep learning algorithms to exploit the landscape of the loss function of the training problem of interest, and learn how to optimize over it in an automatic way. In this paper, we propose a new learning-to-learn model and some useful and practical tricks. Our optimizer outperforms generic, hand-crafted optimization algorithms and state-of-the-art learning-to-learn optimizers by DeepMind in many tasks. We demonstrate the effectiveness of our algorithms on a number of tasks, including deep MLPs, CNNs, and simple LSTMs.
APA
Lv, K., Jiang, S. & Li, J. (2017). Learning Gradient Descent: Better Generalization and Longer Horizons. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:2247-2255. Available from https://proceedings.mlr.press/v70/lv17a.html.