Learning to Learn without Gradient Descent by Gradient Descent

Yutian Chen, Matthew W. Hoffman, Sergio Gómez Colmenarejo, Misha Denil, Timothy P. Lillicrap, Matt Botvinick, Nando de Freitas
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:748-756, 2017.

Abstract

We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
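To make the setting concrete, here is a minimal sketch (not the authors' code) of the query loop such a learned optimizer runs: a tiny recurrent network observes the last query point and its function value and emits the next query. In the paper the RNN weights are meta-trained by gradient descent on synthetic functions; in this sketch the weights, the hidden size, and the toy objective are all illustrative placeholders.

```python
import numpy as np

# Illustrative sketch of an RNN black-box optimizer's query loop.
# The weights below are random stand-ins; in the paper they would be
# meta-trained by gradient descent on simple synthetic functions.

rng = np.random.default_rng(0)
H = 8                                        # hidden state size (arbitrary)
W_in = rng.normal(scale=0.5, size=(H, 2))    # maps (x_t, y_t) -> hidden
W_h = rng.normal(scale=0.5, size=(H, H))     # hidden-to-hidden recurrence
w_out = rng.normal(scale=0.5, size=H)        # hidden -> next query x_{t+1}

def black_box(x):
    """Derivative-free objective, seen only through point evaluations."""
    return (x - 1.5) ** 2

h = np.zeros(H)
x = 0.0
best = np.inf
for t in range(20):                          # fixed query budget (horizon)
    y = black_box(x)                         # evaluate, no gradients used
    best = min(best, y)
    h = np.tanh(W_in @ np.array([x, y]) + W_h @ h)
    x = float(w_out @ h)                     # RNN proposes the next query

print(best)                                  # best value found in budget
```

A meta-trained version would backpropagate through this unrolled loop (summing the observed function values as the loss) to update `W_in`, `W_h`, and `w_out`, which is what makes the exploration/exploitation trade-off learnable up to the training horizon.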

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-chen17e,
  title     = {Learning to Learn without Gradient Descent by Gradient Descent},
  author    = {Yutian Chen and Matthew W. Hoffman and Sergio G{\'o}mez Colmenarejo and Misha Denil and Timothy P. Lillicrap and Matt Botvinick and Nando de Freitas},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {748--756},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/chen17e/chen17e.pdf},
  url       = {https://proceedings.mlr.press/v70/chen17e.html},
  abstract  = {We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.}
}
Endnote
%0 Conference Paper
%T Learning to Learn without Gradient Descent by Gradient Descent
%A Yutian Chen
%A Matthew W. Hoffman
%A Sergio Gómez Colmenarejo
%A Misha Denil
%A Timothy P. Lillicrap
%A Matt Botvinick
%A Nando de Freitas
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-chen17e
%I PMLR
%P 748--756
%U https://proceedings.mlr.press/v70/chen17e.html
%V 70
%X We learn recurrent neural network optimizers trained on simple synthetic functions by gradient descent. We show that these learned optimizers exhibit a remarkable degree of transfer in that they can be used to efficiently optimize a broad range of derivative-free black-box functions, including Gaussian process bandits, simple control objectives, global optimization benchmarks and hyper-parameter tuning tasks. Up to the training horizon, the learned optimizers learn to trade-off exploration and exploitation, and compare favourably with heavily engineered Bayesian optimization packages for hyper-parameter tuning.
APA
Chen, Y., Hoffman, M.W., Colmenarejo, S.G., Denil, M., Lillicrap, T.P., Botvinick, M. & de Freitas, N. (2017). Learning to Learn without Gradient Descent by Gradient Descent. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:748-756. Available from https://proceedings.mlr.press/v70/chen17e.html.