On the Optimality Gap of Warm-Started Hyperparameter Optimization

Parikshit Ram
Proceedings of the First International Conference on Automated Machine Learning, PMLR 188:12/1-14, 2022.

Abstract

We study the general framework of warm-started hyperparameter optimization (HPO), in which we have source datasets (tasks) on which HPO has already been performed, and we wish to leverage the results of those HPO runs to warm-start HPO on an unseen target dataset (that is, to perform few-shot HPO). Various meta-learning schemes have been proposed for this problem over the last decade (and more). In this paper, we theoretically analyse the optimality gap of the hyperparameter obtained via such warm-started few-shot HPO, and provide novel results for multiple existing meta-learning schemes. We show how these results allow us to identify situations in which certain schemes have an advantage over others.
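To make the setup concrete, below is a minimal sketch of one common warm-start scheme, assuming the simplest few-shot strategy: each source task contributes the best hyperparameter configuration it found, and the target task evaluates only that small candidate set. The random-search inner loop, the toy quadratic tasks, and all function names are illustrative assumptions, not the specific meta-learning schemes analysed in the paper.

import random

def run_hpo(objective, search_space, budget, rng):
    """Plain random-search HPO on one task: return the best configuration found."""
    best_cfg, best_val = None, float("inf")
    for _ in range(budget):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in search_space.items()}
        val = objective(cfg)
        if val < best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

def warm_started_few_shot_hpo(source_objectives, target_objective,
                              search_space, source_budget, rng):
    """Few-shot HPO: evaluate only the per-source incumbents on the target task."""
    # Step 1: full HPO on each source task (assumed to have been done offline).
    candidates = [run_hpo(obj, search_space, source_budget, rng)[0]
                  for obj in source_objectives]
    # Step 2: warm-started few-shot HPO on the target, restricted to those candidates.
    scored = [(target_objective(cfg), cfg) for cfg in candidates]
    return min(scored, key=lambda t: t[0])

if __name__ == "__main__":
    rng = random.Random(0)
    space = {"x": (-5.0, 5.0)}
    # Toy tasks: quadratic losses whose minimizers differ slightly across tasks.
    sources = [lambda cfg, c=c: (cfg["x"] - c) ** 2 for c in (1.0, 1.5, 2.0)]
    target = lambda cfg: (cfg["x"] - 1.7) ** 2
    best_val, best_cfg = warm_started_few_shot_hpo(sources, target, space, 200, rng)
    print(f"few-shot incumbent: x={best_cfg['x']:.3f}, target loss={best_val:.4f}")

The optimality gap studied in the paper is the difference between the target loss of such a few-shot incumbent and the loss of the best hyperparameter in the full search space.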

Cite this Paper


BibTeX
@InProceedings{pmlr-v188-ram22a,
  title     = {On the Optimality Gap of Warm-Started Hyperparameter Optimization},
  author    = {Ram, Parikshit},
  booktitle = {Proceedings of the First International Conference on Automated Machine Learning},
  pages     = {12/1--14},
  year      = {2022},
  editor    = {Guyon, Isabelle and Lindauer, Marius and van der Schaar, Mihaela and Hutter, Frank and Garnett, Roman},
  volume    = {188},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v188/ram22a/ram22a.pdf},
  url       = {https://proceedings.mlr.press/v188/ram22a.html},
  abstract  = {We study the general framework of warm-started hyperparameter optimization (HPO), in which we have source datasets (tasks) on which HPO has already been performed, and we wish to leverage the results of those HPO runs to warm-start HPO on an unseen target dataset (that is, to perform few-shot HPO). Various meta-learning schemes have been proposed for this problem over the last decade (and more). In this paper, we theoretically analyse the optimality gap of the hyperparameter obtained via such warm-started few-shot HPO, and provide novel results for multiple existing meta-learning schemes. We show how these results allow us to identify situations in which certain schemes have an advantage over others.}
}
Endnote
%0 Conference Paper
%T On the Optimality Gap of Warm-Started Hyperparameter Optimization
%A Parikshit Ram
%B Proceedings of the First International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Isabelle Guyon
%E Marius Lindauer
%E Mihaela van der Schaar
%E Frank Hutter
%E Roman Garnett
%F pmlr-v188-ram22a
%I PMLR
%P 12/1--14
%U https://proceedings.mlr.press/v188/ram22a.html
%V 188
%X We study the general framework of warm-started hyperparameter optimization (HPO), in which we have source datasets (tasks) on which HPO has already been performed, and we wish to leverage the results of those HPO runs to warm-start HPO on an unseen target dataset (that is, to perform few-shot HPO). Various meta-learning schemes have been proposed for this problem over the last decade (and more). In this paper, we theoretically analyse the optimality gap of the hyperparameter obtained via such warm-started few-shot HPO, and provide novel results for multiple existing meta-learning schemes. We show how these results allow us to identify situations in which certain schemes have an advantage over others.
APA
Ram, P. (2022). On the Optimality Gap of Warm-Started Hyperparameter Optimization. Proceedings of the First International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 188:12/1-14. Available from https://proceedings.mlr.press/v188/ram22a.html.
