Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters

Jelena Luketina, Mathias Berglund, Klaus Greff, Tapani Raiko
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2952-2960, 2016.

Abstract

Hyperparameter selection generally relies on running multiple full training trials, with selection based on validation set performance. We propose a gradient-based approach for locally adjusting hyperparameters during training of the model. Hyperparameters are adjusted so as to make the model parameter gradients, and hence updates, more advantageous for the validation cost. We explore the approach for tuning regularization hyperparameters and find that in experiments on MNIST, SVHN and CIFAR-10, the resulting regularization levels are within the optimal regions. The additional computational cost depends on how frequently the hyperparameters are trained, but the tested scheme adds only 30% computational overhead regardless of the model size. Since the method is significantly less computationally demanding compared to similar gradient-based approaches to hyperparameter optimization, and consistently finds good hyperparameter values, it can be a useful tool for training neural network models.
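To make the mechanism concrete, the sketch below (a minimal illustration in JAX, not the authors' implementation) shows the kind of update the abstract describes: take one ordinary gradient step on the regularized training loss, differentiate the validation cost through that step with respect to the regularization hyperparameter, and adjust the hyperparameter along the resulting hypergradient. The function names (train_loss, val_loss, step), the L2 penalty, and the learning rates are illustrative assumptions, not the paper's exact setup.

import jax
import jax.numpy as jnp

# Illustrative sketch of local, gradient-based tuning of a regularization
# hyperparameter; hypothetical code, not the authors' implementation.

def train_loss(params, lam, batch):
    x, y = batch
    preds = x @ params["w"] + params["b"]
    data_term = jnp.mean((preds - y) ** 2)
    reg_term = lam * jnp.sum(params["w"] ** 2)  # L2 penalty scaled by the hyperparameter
    return data_term + reg_term

def val_loss(params, batch):
    x, y = batch
    preds = x @ params["w"] + params["b"]
    return jnp.mean((preds - y) ** 2)           # validation cost carries no regularizer

def step(params, lam, train_batch, val_batch, lr=1e-2, hyper_lr=1e-3):
    # Ordinary training step on the regularized training loss.
    grads = jax.grad(train_loss)(params, lam, train_batch)
    new_params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

    # Local hypergradient: the validation loss after ONE parameter update,
    # differentiated with respect to the hyperparameter while treating the
    # pre-update parameters as constants (a greedy, per-step approximation).
    def val_after_update(lam_):
        g = jax.grad(train_loss)(params, lam_, train_batch)
        updated = jax.tree_util.tree_map(lambda p, gi: p - lr * gi, params, g)
        return val_loss(updated, val_batch)

    hyper_grad = jax.grad(val_after_update)(lam)
    new_lam = lam - hyper_lr * hyper_grad       # gradient step on the hyperparameter itself
    return new_params, new_lam

# Example usage with random data; shapes and values are arbitrary.
key = jax.random.PRNGKey(0)
params = {"w": jax.random.normal(key, (5, 1)), "b": jnp.zeros((1,))}
lam = jnp.array(0.1)
train_batch = (jax.random.normal(key, (32, 5)), jax.random.normal(key, (32, 1)))
val_batch = (jax.random.normal(key, (32, 5)), jax.random.normal(key, (32, 1)))
params, lam = step(params, lam, train_batch, val_batch)

Only the hyperparameter update requires the extra validation-loss gradient, so it can be run every few training steps; the additional cost scales with how often that step is taken, consistent with the overhead figure quoted in the abstract.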

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-luketina16,
  title     = {Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters},
  author    = {Luketina, Jelena and Berglund, Mathias and Greff, Klaus and Raiko, Tapani},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2952--2960},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/luketina16.pdf},
  url       = {https://proceedings.mlr.press/v48/luketina16.html}
}
APA
Luketina, J., Berglund, M., Greff, K. & Raiko, T. (2016). Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2952-2960. Available from https://proceedings.mlr.press/v48/luketina16.html.

Related Material

Download PDF: http://proceedings.mlr.press/v48/luketina16.pdf