Meta-learning Hyperparameter Performance Prediction with Neural Processes

Ying Wei, Peilin Zhao, Junzhou Huang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11058-11067, 2021.

Abstract

The surrogate that predicts the performance of hyperparameters has been a key component of sequential model-based hyperparameter optimization. In practical applications, a trial of a hyperparameter configuration may be so costly that a surrogate is expected to return an optimal configuration with as few trials as possible. Observing that human experts draw on their expertise with a machine learning model by trying configurations that once performed well on other datasets, we are inspired to build a trial-efficient surrogate by transferring the meta-knowledge learned from historical trials on other datasets. We propose an end-to-end surrogate named Transfer Neural Processes (TNP) that learns a comprehensive set of meta-knowledge, including the parameters of historical surrogates, historical trials, and initial configurations for other datasets. Extensive experiments on OpenML datasets and three computer vision datasets demonstrate that the proposed algorithm achieves state-of-the-art performance with at least one order of magnitude fewer trials.
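
To make the setting concrete, below is a minimal sketch of the sequential model-based optimization (SMBO) loop that such a surrogate drives: fit a surrogate to the observed (configuration, performance) trials, pick the next configuration via an acquisition function, evaluate it, and repeat. This is not the paper's TNP model; the surrogate here is a generic kernel-regression stand-in with an upper-confidence-bound acquisition, and all names (objective, fit_surrogate, the toy one-dimensional search space) are illustrative assumptions.

    # Minimal SMBO sketch (assumed setup, not the authors' TNP).
    import numpy as np

    def objective(x):
        # Stand-in for an expensive trial, e.g. training a model with
        # hyperparameter x and returning validation accuracy.
        return float(np.exp(-(x - 0.3) ** 2 / 0.05))

    def fit_surrogate(X, y, lengthscale=0.2, noise=1e-3):
        # Kernel ridge regression as a simple surrogate:
        # solve (K + noise*I) alpha = y, predict with k(x)^T alpha.
        K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * lengthscale ** 2))
        alpha = np.linalg.solve(K + noise * np.eye(len(X)), y)

        def predict(Xq):
            k = np.exp(-((Xq[:, None] - X[None, :]) ** 2) / (2 * lengthscale ** 2))
            mean = k @ alpha
            # Crude uncertainty proxy: large where Xq is far from all trials.
            std = 1.0 - k.max(axis=1)
            return mean, std

        return predict

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=3)              # a few initial trials
    y = np.array([objective(x) for x in X])
    candidates = np.linspace(0, 1, 200)

    for trial in range(10):
        predict = fit_surrogate(X, y)
        mean, std = predict(candidates)
        x_next = candidates[np.argmax(mean + 1.0 * std)]  # UCB acquisition
        X = np.append(X, x_next)
        y = np.append(y, objective(x_next))

    print(f"best configuration: {X[np.argmax(y)]:.3f}, score: {y.max():.3f}")

TNP's contribution, per the abstract, is to make this loop trial-efficient by meta-learning the surrogate and its initial configurations from historical trials on other datasets, rather than starting from scratch as the sketch above does.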

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-wei21c,
  title     = {Meta-learning Hyperparameter Performance Prediction with Neural Processes},
  author    = {Wei, Ying and Zhao, Peilin and Huang, Junzhou},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11058--11067},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/wei21c/wei21c.pdf},
  url       = {https://proceedings.mlr.press/v139/wei21c.html}
}
Endnote
%0 Conference Paper
%T Meta-learning Hyperparameter Performance Prediction with Neural Processes
%A Ying Wei
%A Peilin Zhao
%A Junzhou Huang
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-wei21c
%I PMLR
%P 11058--11067
%U https://proceedings.mlr.press/v139/wei21c.html
%V 139
APA
Wei, Y., Zhao, P., & Huang, J. (2021). Meta-learning Hyperparameter Performance Prediction with Neural Processes. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11058-11067. Available from https://proceedings.mlr.press/v139/wei21c.html.