Forward and Reverse Gradient-Based Hyperparameter Optimization

Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1165-1173, 2017.

Abstract

We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al. (2015) but does not require reversible dynamics. Additionally, we explore the use of constraints on the hyperparameters. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up hyperparameter optimization on large datasets. We present a series of experiments on image and phone classification tasks. In the second task, previous gradient-based approaches are prohibitive. We show that our real-time algorithm yields state-of-the-art results in affordable time.
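
As a minimal sketch of the idea described above (an illustration under simplifying assumptions, not the authors' implementation): take a single hyperparameter, the L2 regularization strength of a linear model trained by a fixed number of unrolled gradient-descent steps, and differentiate the resulting validation error with respect to that hyperparameter. In JAX, reverse-mode differentiation of the unrolled dynamics (jax.grad) and forward-mode differentiation (jax.jacfwd) give the same hypergradient, mirroring the two procedures studied in the paper.

import jax
import jax.numpy as jnp

def train_loss(w, lam, X, y):
    # training objective: squared error plus L2 penalty controlled by the hyperparameter lam
    return jnp.mean((X @ w - y) ** 2) + lam * jnp.sum(w ** 2)

def val_loss(w, Xv, yv):
    # validation error, a function of the trained weights only
    return jnp.mean((Xv @ w - yv) ** 2)

def train_then_validate(lam, X, y, Xv, yv, T=100, lr=0.1):
    # unrolled training dynamics: T steps of gradient descent on the training loss
    w = jnp.zeros(X.shape[1])
    for _ in range(T):
        w = w - lr * jax.grad(train_loss)(w, lam, X, y)
    # validation error of the final iterate, as a function of the hyperparameter
    return val_loss(w, Xv, yv)

# toy data (illustrative only)
key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
X, y = jax.random.normal(k1, (50, 5)), jax.random.normal(k2, (50,))
Xv, yv = jax.random.normal(k3, (20, 5)), jax.random.normal(k4, (20,))

lam = 0.1
rev = jax.grad(train_then_validate)(lam, X, y, Xv, yv)    # reverse-mode hypergradient
fwd = jax.jacfwd(train_then_validate)(lam, X, y, Xv, yv)  # forward-mode hypergradient
print(rev, fwd)  # the two modes agree up to numerical precision

Reverse mode needs the whole trajectory of weights (memory grows with T) but handles many hyperparameters at once; forward mode propagates a sensitivity alongside training (constant memory in T) and scales with the number of hyperparameters, which is what makes it suitable for the real-time updates mentioned in the abstract.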

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-franceschi17a,
  title     = {Forward and Reverse Gradient-Based Hyperparameter Optimization},
  author    = {Luca Franceschi and Michele Donini and Paolo Frasconi and Massimiliano Pontil},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1165--1173},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/franceschi17a/franceschi17a.pdf},
  url       = {https://proceedings.mlr.press/v70/franceschi17a.html},
  abstract  = {We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al (2015) but does not require reversible dynamics. Additionally, we explore the use of constraints on the hyperparameters. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speedup hyperparameter optimization on large datasets. We present a series of experiments on image and phone classification tasks. In the second task, previous gradient-based approaches are prohibitive. We show that our real-time algorithm yields state-of-the-art results in affordable time.}
}
Endnote
%0 Conference Paper
%T Forward and Reverse Gradient-Based Hyperparameter Optimization
%A Luca Franceschi
%A Michele Donini
%A Paolo Frasconi
%A Massimiliano Pontil
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-franceschi17a
%I PMLR
%P 1165--1173
%U https://proceedings.mlr.press/v70/franceschi17a.html
%V 70
%X We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two ways of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al (2015) but does not require reversible dynamics. Additionally, we explore the use of constraints on the hyperparameters. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speedup hyperparameter optimization on large datasets. We present a series of experiments on image and phone classification tasks. In the second task, previous gradient-based approaches are prohibitive. We show that our real-time algorithm yields state-of-the-art results in affordable time.
APA
Franceschi, L., Donini, M., Frasconi, P. & Pontil, M. (2017). Forward and Reverse Gradient-Based Hyperparameter Optimization. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1165-1173. Available from https://proceedings.mlr.press/v70/franceschi17a.html.