Gradient-based Hyperparameter Optimization through Reversible Learning

Dougal Maclaurin, David Duvenaud, Ryan Adams
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2113-2122, 2015.

Abstract

Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
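
The last sentence of the abstract is the key mechanism: because SGD with momentum is an invertible dynamical system, the weight trajectory needed for reverse-mode differentiation can be reconstructed on the fly by running the optimizer backwards rather than stored in memory. As a minimal sketch of that reversibility (plain NumPy, not the authors' implementation, and omitting both the hypergradient accumulation and the finite-precision bookkeeping the paper describes), assume the update form v <- gamma*v - (1-gamma)*grad L(w), w <- w + alpha*v; the loss, step count, and hyperparameter values below are purely illustrative:

import numpy as np

def grad_loss(w):
    # Illustrative quadratic training loss L(w) = 0.5 * ||w||^2, so the gradient is w.
    return w

def train_forward(w, v, alpha, gamma, num_steps):
    # SGD with momentum: v <- gamma*v - (1-gamma)*grad, then w <- w + alpha*v.
    for _ in range(num_steps):
        v = gamma * v - (1 - gamma) * grad_loss(w)
        w = w + alpha * v
    return w, v

def train_reverse(w, v, alpha, gamma, num_steps):
    # Exactly invert each update, most recent step first:
    # undo the weight update, then recover the previous velocity.
    for _ in range(num_steps):
        w = w - alpha * v
        v = (v + (1 - gamma) * grad_loss(w)) / gamma
    return w, v

w0, v0 = np.array([1.0, -2.0]), np.zeros(2)
alpha, gamma, T = 0.1, 0.9, 100

wT, vT = train_forward(w0, v0, alpha, gamma, T)
w_rec, v_rec = train_reverse(wT, vT, alpha, gamma, T)
print(np.allclose(w_rec, w0), np.allclose(v_rec, v0))  # expect: True True (up to round-off)

In exact arithmetic the reversal is exact; in floating point each multiplication by gamma discards low-order bits, which is why the paper additionally stores those lost bits so the reverse pass reproduces the forward trajectory exactly while the hyperparameter gradients are accumulated.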

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-maclaurin15,
  title     = {Gradient-based Hyperparameter Optimization through Reversible Learning},
  author    = {Maclaurin, Dougal and Duvenaud, David and Adams, Ryan},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {2113--2122},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/maclaurin15.pdf},
  url       = {https://proceedings.mlr.press/v37/maclaurin15.html},
  abstract  = {Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.}
}
Endnote
%0 Conference Paper
%T Gradient-based Hyperparameter Optimization through Reversible Learning
%A Dougal Maclaurin
%A David Duvenaud
%A Ryan Adams
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-maclaurin15
%I PMLR
%P 2113--2122
%U https://proceedings.mlr.press/v37/maclaurin15.html
%V 37
%X Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
RIS
TY - CPAPER
TI - Gradient-based Hyperparameter Optimization through Reversible Learning
AU - Dougal Maclaurin
AU - David Duvenaud
AU - Ryan Adams
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-maclaurin15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 2113
EP - 2122
L1 - http://proceedings.mlr.press/v37/maclaurin15.pdf
UR - https://proceedings.mlr.press/v37/maclaurin15.html
AB - Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable. We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum.
ER -
APA
Maclaurin, D., Duvenaud, D. & Adams, R. (2015). Gradient-based Hyperparameter Optimization through Reversible Learning. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:2113-2122. Available from https://proceedings.mlr.press/v37/maclaurin15.html.
