Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks

Shuhei Watanabe, Neeratyoy Mallik, Edward Bergman, Frank Hutter
Proceedings of the Third International Conference on Automated Machine Learning, PMLR 256:14/1-18, 2024.

Abstract

While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs). However, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools. While zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups because each worker must communicate its queried runtime so that evaluations return in the correct order. This work addresses this challenge by introducing a user-friendly Python package that facilitates efficient parallel HPO with zero-cost benchmarks. Our approach calculates the exact return order based on information stored in the file system, eliminating the need for long waiting times and enabling much faster HPO evaluations. We first verify the correctness of our approach through extensive testing, and experiments with 6 popular HPO libraries show its applicability to diverse libraries and its ability to achieve over 1000$\times$ speedup compared to a traditional approach. Our package can be installed via pip install mfhpo-simulator.
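
The core mechanism the abstract describes, returning each parallel worker's result in the order of its simulated (queried) runtime instead of waiting for real training, can be illustrated with a short single-process sketch. This is only an illustration of that ordering principle, not the mfhpo-simulator implementation itself (which, per the abstract, coordinates actual optimizer worker processes through information stored in the file system); sample_config, query_benchmark, and tell_result below are hypothetical stand-ins for an optimizer's ask/tell interface and a tabular benchmark lookup.

import heapq

def simulate_parallel_hpo(sample_config, query_benchmark, tell_result,
                          n_workers=4, n_evals=100):
    """Return zero-cost benchmark results in simulated completion order.

    sample_config()          -> config           (ask the optimizer)
    query_benchmark(config)  -> (loss, runtime)  (instant table lookup)
    tell_result(config, loss)                    (report back to the optimizer)
    """
    # Each simulated worker starts one evaluation at simulated time 0.
    heap = []  # entries: (simulated completion time, worker id, config, loss)
    for worker in range(n_workers):
        config = sample_config()
        loss, runtime = query_benchmark(config)
        heapq.heappush(heap, (runtime, worker, config, loss))

    for _ in range(n_workers, n_evals):
        # The in-flight evaluation with the earliest simulated finish time returns first.
        t, worker, config, loss = heapq.heappop(heap)
        tell_result(config, loss)
        # The freed worker immediately starts the next suggested configuration.
        new_config = sample_config()
        new_loss, new_runtime = query_benchmark(new_config)
        heapq.heappush(heap, (t + new_runtime, worker, new_config, new_loss))

    # Drain the remaining in-flight evaluations in simulated completion order.
    while heap:
        _, _, config, loss = heapq.heappop(heap)
        tell_result(config, loss)

Because the return order is derived purely from the queried runtimes, the optimizer observes the same sequence of results it would see in a real parallel run, while the loop itself completes almost instantly; this is the source of the speedup reported in the abstract.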

Cite this Paper


BibTeX
@InProceedings{pmlr-v256-watanabe24a,
  title     = {Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks},
  author    = {Watanabe, Shuhei and Mallik, Neeratyoy and Bergman, Edward and Hutter, Frank},
  booktitle = {Proceedings of the Third International Conference on Automated Machine Learning},
  pages     = {14/1--18},
  year      = {2024},
  editor    = {Eggensperger, Katharina and Garnett, Roman and Vanschoren, Joaquin and Lindauer, Marius and Gardner, Jacob R.},
  volume    = {256},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v256/main/assets/watanabe24a/watanabe24a.pdf},
  url       = {https://proceedings.mlr.press/v256/watanabe24a.html},
  abstract  = {While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs). However, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools. While zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups as each worker must communicate its queried runtime to return its evaluation in the exact order. This work addresses this challenge by introducing a user-friendly Python package that facilitates efficient parallel HPO with zero-cost benchmarks. Our approach calculates the exact return order based on the information stored in file system, eliminating the need for long waiting times and enabling much faster HPO evaluations. We first verify the correctness of our approach through extensive testing and the experiments with 6 popular HPO libraries show its applicability to diverse libraries and its ability to achieve over 1000$\times$ speedup compared to a traditional approach. Our package can be installed via pip install mfhpo-simulator.}
}
Endnote
%0 Conference Paper
%T Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks
%A Shuhei Watanabe
%A Neeratyoy Mallik
%A Edward Bergman
%A Frank Hutter
%B Proceedings of the Third International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Katharina Eggensperger
%E Roman Garnett
%E Joaquin Vanschoren
%E Marius Lindauer
%E Jacob R. Gardner
%F pmlr-v256-watanabe24a
%I PMLR
%P 14/1--18
%U https://proceedings.mlr.press/v256/watanabe24a.html
%V 256
%X While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs). However, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools. While zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups as each worker must communicate its queried runtime to return its evaluation in the exact order. This work addresses this challenge by introducing a user-friendly Python package that facilitates efficient parallel HPO with zero-cost benchmarks. Our approach calculates the exact return order based on the information stored in file system, eliminating the need for long waiting times and enabling much faster HPO evaluations. We first verify the correctness of our approach through extensive testing and the experiments with 6 popular HPO libraries show its applicability to diverse libraries and its ability to achieve over 1000$\times$ speedup compared to a traditional approach. Our package can be installed via pip install mfhpo-simulator.
APA
Watanabe, S., Mallik, N., Bergman, E., & Hutter, F. (2024). Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks. Proceedings of the Third International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 256:14/1-18. Available from https://proceedings.mlr.press/v256/watanabe24a.html.
