Efficient Transfer Learning Method for Automatic Hyperparameter Tuning

Dani Yogatama, Gideon Mann
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:1077-1085, 2014.

Abstract

We propose a fast and effective algorithm for automatic hyperparameter tuning that can generalize across datasets. Our method is an instance of sequential model-based optimization (SMBO) that transfers information by constructing a common response surface for all datasets, similar to Bardenet et al. (2013). The time complexity of reconstructing the response surface at every SMBO iteration in our method is linear in the number of trials (significantly less than previous work with comparable performance), allowing the method to realistically scale to many more datasets. Specifically, we use deviations from the per-dataset mean as the response values. We empirically show the superiority of our method on a large number of synthetic and real-world datasets for tuning hyperparameters of logistic regression and ensembles of classifiers.
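The recentring trick described in the abstract lends itself to a compact illustration. The sketch below is a minimal, hedged Python example of that idea only: responses from previously tuned datasets are recentred by subtracting each dataset's mean, pooled with the target dataset's (recentred) trials, and used to score new hyperparameter candidates inside an SMBO loop. The k-nearest-neighbour surrogate, greedy acquisition, toy objective, and all names here are illustrative assumptions, not the surrogate model or acquisition function used in the paper.

# Sketch of the idea in the abstract: pool trials from several datasets by
# recentring each dataset's responses around its own mean, then treat the
# pooled deviations as a shared response surface when tuning a new dataset.
# The surrogate (k-NN averaging) and the toy objective are assumptions made
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def recenter(responses):
    """Return deviations from the per-dataset mean response."""
    responses = np.asarray(responses, dtype=float)
    return responses - responses.mean()

def knn_predict(X_train, y_train, x, k=5):
    """Predict a deviation at x by averaging the k nearest observed trials."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    return y_train[idx].mean()

# Toy source datasets: each has a different baseline accuracy but a similar
# optimum, so recentring makes their trials comparable.
def toy_accuracy(log_c, offset):
    return offset - (log_c - 1.0) ** 2 + 0.05 * rng.standard_normal()

source_X, source_dev = [], []
for offset in (0.7, 0.8, 0.9):            # three previously tuned datasets
    xs = rng.uniform(-3, 3, size=20).reshape(-1, 1)
    ys = np.array([toy_accuracy(x[0], offset) for x in xs])
    source_X.append(xs)
    source_dev.append(recenter(ys))        # deviations, so offsets cancel out
source_X = np.vstack(source_X)
source_dev = np.concatenate(source_dev)

# SMBO loop on a new target dataset: pooling and recentring the stored trials
# costs time linear in the number of trials at each iteration.
target_X, target_y = [], []
candidates = np.linspace(-3, 3, 61).reshape(-1, 1)
for t in range(10):
    if target_y:
        X = np.vstack([source_X, np.array(target_X)])
        y = np.concatenate([source_dev, recenter(target_y)])
    else:
        X, y = source_X, source_dev
    scores = np.array([knn_predict(X, y, c) for c in candidates])
    x_next = candidates[int(np.argmax(scores))]    # greedy acquisition
    y_next = toy_accuracy(x_next[0], offset=0.75)  # evaluate on the target
    target_X.append(x_next)
    target_y.append(y_next)

best = target_X[int(np.argmax(target_y))][0]
print(f"best log(C) found: {best:.2f}")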

Cite this Paper


BibTeX
@InProceedings{pmlr-v33-yogatama14,
  title     = {{Efficient Transfer Learning Method for Automatic Hyperparameter Tuning}},
  author    = {Yogatama, Dani and Mann, Gideon},
  booktitle = {Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics},
  pages     = {1077--1085},
  year      = {2014},
  editor    = {Kaski, Samuel and Corander, Jukka},
  volume    = {33},
  series    = {Proceedings of Machine Learning Research},
  address   = {Reykjavik, Iceland},
  month     = {22--25 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v33/yogatama14.pdf},
  url       = {https://proceedings.mlr.press/v33/yogatama14.html},
  abstract  = {We propose a fast and effective algorithm for automatic hyperparameter tuning that can generalize across datasets. Our method is an instance of sequential model-based optimization (SMBO) that transfers information by constructing a common response surface for all datasets, similar to Bardenet et al. (2013). The time complexity of reconstructing the response surface at every SMBO iteration in our method is linear in the number of trials (significantly less than previous work with comparable performance), allowing the method to realistically scale to many more datasets. Specifically, we use deviations from the per-dataset mean as the response values. We empirically show the superiority of our method on a large number of synthetic and real-world datasets for tuning hyperparameters of logistic regression and ensembles of classifiers.}
}
Endnote
%0 Conference Paper
%T Efficient Transfer Learning Method for Automatic Hyperparameter Tuning
%A Dani Yogatama
%A Gideon Mann
%B Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2014
%E Samuel Kaski
%E Jukka Corander
%F pmlr-v33-yogatama14
%I PMLR
%P 1077--1085
%U https://proceedings.mlr.press/v33/yogatama14.html
%V 33
%X We propose a fast and effective algorithm for automatic hyperparameter tuning that can generalize across datasets. Our method is an instance of sequential model-based optimization (SMBO) that transfers information by constructing a common response surface for all datasets, similar to Bardenet et al. (2013). The time complexity of reconstructing the response surface at every SMBO iteration in our method is linear in the number of trials (significantly less than previous work with comparable performance), allowing the method to realistically scale to many more datasets. Specifically, we use deviations from the per-dataset mean as the response values. We empirically show the superiority of our method on a large number of synthetic and real-world datasets for tuning hyperparameters of logistic regression and ensembles of classifiers.
RIS
TY  - CPAPER
TI  - Efficient Transfer Learning Method for Automatic Hyperparameter Tuning
AU  - Dani Yogatama
AU  - Gideon Mann
BT  - Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics
DA  - 2014/04/02
ED  - Samuel Kaski
ED  - Jukka Corander
ID  - pmlr-v33-yogatama14
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 33
SP  - 1077
EP  - 1085
L1  - http://proceedings.mlr.press/v33/yogatama14.pdf
UR  - https://proceedings.mlr.press/v33/yogatama14.html
AB  - We propose a fast and effective algorithm for automatic hyperparameter tuning that can generalize across datasets. Our method is an instance of sequential model-based optimization (SMBO) that transfers information by constructing a common response surface for all datasets, similar to Bardenet et al. (2013). The time complexity of reconstructing the response surface at every SMBO iteration in our method is linear in the number of trials (significantly less than previous work with comparable performance), allowing the method to realistically scale to many more datasets. Specifically, we use deviations from the per-dataset mean as the response values. We empirically show the superiority of our method on a large number of synthetic and real-world datasets for tuning hyperparameters of logistic regression and ensembles of classifiers.
ER  -
APA
Yogatama, D. & Mann, G. (2014). Efficient Transfer Learning Method for Automatic Hyperparameter Tuning. Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 33:1077-1085. Available from https://proceedings.mlr.press/v33/yogatama14.html.
