Bilevel Programming for Hyperparameter Optimization and Meta-Learning

Luca Franceschi, Paolo Frasconi, Saverio Salzo, Riccardo Grazzi, Massimiliano Pontil
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:1568-1577, 2018.

Abstract

We introduce a framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning. We show that an approximate version of the bilevel problem can be solved by taking into explicit account the optimization dynamics for the inner objective. Depending on the specific setting, the outer variables take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We provide sufficient conditions under which solutions of the approximate problem converge to those of the exact problem. We instantiate our approach for meta-learning in the case of deep learning where representation layers are treated as hyperparameters shared across a set of training episodes. In experiments, we confirm our theoretical findings, present encouraging results for few-shot learning and contrast the bilevel approach against classical approaches for learning-to-learn.
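The core idea in the abstract — replacing the exact inner minimization with a finite number of explicit optimization steps and differentiating the outer objective through those steps — can be illustrated on a toy problem. The sketch below is not the authors' implementation; it is a minimal, hypothetical 1-D instance where the inner objective is a regularized quadratic, the outer variable is the regularization strength `lam`, and the hypergradient is obtained in forward mode by propagating `z_t = dw_t/dlam` alongside the unrolled inner dynamics.

```python
def inner_step(w, lam, a=2.0, eta=0.1):
    """One gradient-descent step on the inner (training) objective
    L_train(w) = (w - a)^2 + lam * w^2."""
    grad_w = 2.0 * (w - a) + 2.0 * lam * w
    return w - eta * grad_w

def hypergradient(lam, T=50, a=2.0, b=1.0, eta=0.1):
    """Approximate bilevel problem: run T inner steps, then evaluate the
    outer (validation) objective L_val(w_T) = (w_T - b)^2.

    Returns (L_val, dL_val/dlam), with the derivative computed by
    forward-mode propagation of z_t = dw_t/dlam through the unrolled
    dynamics w_{t+1} = w_t - eta * (2*(w_t - a) + 2*lam*w_t).
    """
    w, z = 0.0, 0.0
    for _ in range(T):
        # Differentiate the update rule w.r.t. lam (uses the pre-update w_t):
        # dw_{t+1}/dlam = (1 - 2*eta*(1 + lam)) * z_t - 2*eta*w_t
        z = (1.0 - 2.0 * eta * (1.0 + lam)) * z - 2.0 * eta * w
        w = inner_step(w, lam, a=a, eta=eta)
    val_loss = (w - b) ** 2
    return val_loss, 2.0 * (w - b) * z
```

As T grows, `w_T` approaches the exact inner minimizer `a / (1 + lam)`, which mirrors the paper's setting in which solutions of the approximate problem converge to those of the exact bilevel problem; in practice one would update `lam` by descending this hypergradient.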

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-franceschi18a,
  title     = {Bilevel Programming for Hyperparameter Optimization and Meta-Learning},
  author    = {Franceschi, Luca and Frasconi, Paolo and Salzo, Saverio and Grazzi, Riccardo and Pontil, Massimiliano},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {1568--1577},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/franceschi18a/franceschi18a.pdf},
  url       = {https://proceedings.mlr.press/v80/franceschi18a.html},
  abstract  = {We introduce a framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning. We show that an approximate version of the bilevel problem can be solved by taking into explicit account the optimization dynamics for the inner objective. Depending on the specific setting, the outer variables take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We provide sufficient conditions under which solutions of the approximate problem converge to those of the exact problem. We instantiate our approach for meta-learning in the case of deep learning where representation layers are treated as hyperparameters shared across a set of training episodes. In experiments, we confirm our theoretical findings, present encouraging results for few-shot learning and contrast the bilevel approach against classical approaches for learning-to-learn.}
}
Endnote
%0 Conference Paper
%T Bilevel Programming for Hyperparameter Optimization and Meta-Learning
%A Luca Franceschi
%A Paolo Frasconi
%A Saverio Salzo
%A Riccardo Grazzi
%A Massimiliano Pontil
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-franceschi18a
%I PMLR
%P 1568--1577
%U https://proceedings.mlr.press/v80/franceschi18a.html
%V 80
%X We introduce a framework based on bilevel programming that unifies gradient-based hyperparameter optimization and meta-learning. We show that an approximate version of the bilevel problem can be solved by taking into explicit account the optimization dynamics for the inner objective. Depending on the specific setting, the outer variables take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We provide sufficient conditions under which solutions of the approximate problem converge to those of the exact problem. We instantiate our approach for meta-learning in the case of deep learning where representation layers are treated as hyperparameters shared across a set of training episodes. In experiments, we confirm our theoretical findings, present encouraging results for few-shot learning and contrast the bilevel approach against classical approaches for learning-to-learn.
APA
Franceschi, L., Frasconi, P., Salzo, S., Grazzi, R. & Pontil, M. (2018). Bilevel Programming for Hyperparameter Optimization and Meta-Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:1568-1577. Available from https://proceedings.mlr.press/v80/franceschi18a.html.