Optimizer Benchmarking Needs to Account for Hyperparameter Tuning

Prabhu Teja Sivaprasad, Florian Mai, Thijs Vogels, Martin Jaggi, François Fleuret
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9036-9045, 2020.

Abstract

The performance of optimizers, particularly in deep learning, depends considerably on their chosen hyperparameter configuration. The efficacy of optimizers is often studied under near-optimal problem-specific hyperparameters, and finding these settings may be prohibitively costly for practitioners. In this work, we argue that a fair assessment of optimizers’ performance must take the computational cost of hyperparameter tuning into account, i.e., how easy it is to find good hyperparameter configurations using an automatic hyperparameter search. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, our results indicate that Adam is the most practical solution, particularly in low-budget scenarios.
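To make the abstract's notion of "performance under a tuning budget" concrete, here is a minimal sketch (not the paper's code) of how one might compare optimizers while accounting for hyperparameter search cost: given per-trial validation scores from a random hyperparameter search, estimate the expected best score reachable within a budget of k trials. The `trial_scores` numbers and function names below are hypothetical placeholders, not results from the paper.

import random
from statistics import mean

def expected_best_at_budget(scores, budget, n_resamples=1000, seed=0):
    """Estimate E[max of `budget` scores drawn without replacement from `scores`]."""
    rng = random.Random(seed)
    draws = (max(rng.sample(scores, budget)) for _ in range(n_resamples))
    return mean(draws)

# Hypothetical validation accuracies from 20 random learning-rate trials per optimizer.
trial_scores = {
    "adam": [0.89, 0.91, 0.90, 0.88, 0.92, 0.87, 0.91, 0.90, 0.89, 0.93,
             0.90, 0.88, 0.92, 0.91, 0.89, 0.90, 0.91, 0.88, 0.92, 0.90],
    "sgd":  [0.70, 0.85, 0.93, 0.60, 0.88, 0.75, 0.94, 0.65, 0.82, 0.90,
             0.72, 0.86, 0.91, 0.68, 0.80, 0.89, 0.74, 0.92, 0.78, 0.84],
}

for budget in (1, 4, 16):
    row = {name: round(expected_best_at_budget(s, budget), 3)
           for name, s in trial_scores.items()}
    print(f"budget={budget:>2} trials -> expected best accuracy: {row}")

Under such an analysis, an optimizer whose good configurations are easy to find (here, the hypothetical "adam" scores) can dominate at small budgets even if another optimizer has a higher peak score at large budgets, which is the kind of trade-off the paper's evaluation is designed to surface.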

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-sivaprasad20a,
  title     = {Optimizer Benchmarking Needs to Account for Hyperparameter Tuning},
  author    = {Sivaprasad, Prabhu Teja and Mai, Florian and Vogels, Thijs and Jaggi, Martin and Fleuret, Fran{\c{c}}ois},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9036--9045},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/sivaprasad20a/sivaprasad20a.pdf},
  url       = {https://proceedings.mlr.press/v119/sivaprasad20a.html},
  abstract  = {The performance of optimizers, particularly in deep learning, depends considerably on their chosen hyperparameter configuration. The efficacy of optimizers is often studied under near-optimal problem-specific hyperparameters, and finding these settings may be prohibitively costly for practitioners. In this work, we argue that a fair assessment of optimizers’ performance must take the computational cost of hyperparameter tuning into account, i.e., how easy it is to find good hyperparameter configurations using an automatic hyperparameter search. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, our results indicate that Adam is the most practical solution, particularly in low-budget scenarios.}
}
Endnote
%0 Conference Paper
%T Optimizer Benchmarking Needs to Account for Hyperparameter Tuning
%A Prabhu Teja Sivaprasad
%A Florian Mai
%A Thijs Vogels
%A Martin Jaggi
%A François Fleuret
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-sivaprasad20a
%I PMLR
%P 9036--9045
%U https://proceedings.mlr.press/v119/sivaprasad20a.html
%V 119
%X The performance of optimizers, particularly in deep learning, depends considerably on their chosen hyperparameter configuration. The efficacy of optimizers is often studied under near-optimal problem-specific hyperparameters, and finding these settings may be prohibitively costly for practitioners. In this work, we argue that a fair assessment of optimizers’ performance must take the computational cost of hyperparameter tuning into account, i.e., how easy it is to find good hyperparameter configurations using an automatic hyperparameter search. Evaluating a variety of optimizers on an extensive set of standard datasets and architectures, our results indicate that Adam is the most practical solution, particularly in low-budget scenarios.
APA
Sivaprasad, P.T., Mai, F., Vogels, T., Jaggi, M. & Fleuret, F. (2020). Optimizer Benchmarking Needs to Account for Hyperparameter Tuning. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9036-9045. Available from https://proceedings.mlr.press/v119/sivaprasad20a.html.
