A stopping criterion for Bayesian optimization by the gap of expected minimum simple regrets

Hideaki Ishibashi, Masayuki Karasuyama, Ichiro Takeuchi, Hideitsu Hino
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:6463-6497, 2023.

Abstract

Bayesian optimization (BO) improves the efficiency of black-box optimization; however, the associated computational cost and power consumption remain dominant in the application of machine learning methods. This paper proposes a method of determining the stopping time in BO. The proposed criterion is based on the difference between the expectation of the minimum of a variant of the simple regrets before and after evaluating the objective function with a new parameter setting. Unlike existing stopping criteria, the proposed criterion is guaranteed to converge to the theoretically optimal stopping criterion for any choices of arbitrary acquisition functions and threshold values. Moreover, the threshold for the stopping criterion can be determined automatically and adaptively. We experimentally demonstrate that the proposed stopping criterion finds reasonable timing to stop a BO with a small number of evaluations of the objective function.
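As a rough illustration of the idea (not the paper's exact construction), the following Python sketch stops a Gaussian-process BO loop for minimization when the gap between the expected minimum before and after one hypothetical further evaluation drops below a threshold; under these simplifications the gap coincides with the expected improvement at the next query point. The toy objective, RBF kernel, candidate grid, and fixed threshold are all assumptions made for the sketch, whereas the paper's criterion also determines the threshold automatically and adaptively.

# Minimal illustrative sketch (not the authors' exact criterion):
# stop a GP-based BO loop when the expected minimum would drop by less
# than a threshold if one more point were evaluated.  With these
# simplifications the gap equals the expected improvement at the next
# query point.  Objective, kernel, grid, and threshold are assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):                        # toy 1-D black-box function
    return np.sin(3.0 * x) + 0.3 * x**2

rng = np.random.default_rng(0)
grid = np.linspace(-3.0, 3.0, 400).reshape(-1, 1)   # candidate set
X = rng.uniform(-3.0, 3.0, size=(3, 1))             # initial design
y = objective(X).ravel()

threshold = 1e-3                         # assumed fixed threshold
for t in range(50):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()                       # current best observed value

    # Expected improvement = expected drop in the posterior minimum if
    # the candidate were evaluated next (minimization convention).
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    x_next = grid[np.argmax(ei)]
    gap = ei.max()       # expected-minimum gap for the chosen point
    if gap < threshold:  # stop: one more evaluation is not worth it
        print(f"stopped at iteration {t}, best value {best:.4f}")
        break

    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).item())

In this simplified form the rule amounts to stopping when the acquisition value itself becomes negligible; the criterion proposed in the paper is more general and comes with convergence guarantees for arbitrary acquisition functions and threshold values.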

Cite this Paper

BibTeX
@InProceedings{pmlr-v206-ishibashi23a,
  title     = {A stopping criterion for Bayesian optimization by the gap of expected minimum simple regrets},
  author    = {Ishibashi, Hideaki and Karasuyama, Masayuki and Takeuchi, Ichiro and Hino, Hideitsu},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {6463--6497},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/ishibashi23a/ishibashi23a.pdf},
  url       = {https://proceedings.mlr.press/v206/ishibashi23a.html},
  abstract  = {Bayesian optimization (BO) improves the efficiency of black-box optimization; however, the associated computational cost and power consumption remain dominant in the application of machine learning methods. This paper proposes a method of determining the stopping time in BO. The proposed criterion is based on the difference between the expectation of the minimum of a variant of the simple regrets before and after evaluating the objective function with a new parameter setting. Unlike existing stopping criteria, the proposed criterion is guaranteed to converge to the theoretically optimal stopping criterion for any choices of arbitrary acquisition functions and threshold values. Moreover, the threshold for the stopping criterion can be determined automatically and adaptively. We experimentally demonstrate that the proposed stopping criterion finds reasonable timing to stop a BO with a small number of evaluations of the objective function.}
}
Endnote
%0 Conference Paper
%T A stopping criterion for Bayesian optimization by the gap of expected minimum simple regrets
%A Hideaki Ishibashi
%A Masayuki Karasuyama
%A Ichiro Takeuchi
%A Hideitsu Hino
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-ishibashi23a
%I PMLR
%P 6463--6497
%U https://proceedings.mlr.press/v206/ishibashi23a.html
%V 206
%X Bayesian optimization (BO) improves the efficiency of black-box optimization; however, the associated computational cost and power consumption remain dominant in the application of machine learning methods. This paper proposes a method of determining the stopping time in BO. The proposed criterion is based on the difference between the expectation of the minimum of a variant of the simple regrets before and after evaluating the objective function with a new parameter setting. Unlike existing stopping criteria, the proposed criterion is guaranteed to converge to the theoretically optimal stopping criterion for any choices of arbitrary acquisition functions and threshold values. Moreover, the threshold for the stopping criterion can be determined automatically and adaptively. We experimentally demonstrate that the proposed stopping criterion finds reasonable timing to stop a BO with a small number of evaluations of the objective function.
APA
Ishibashi, H., Karasuyama, M., Takeuchi, I. & Hino, H. (2023). A stopping criterion for Bayesian optimization by the gap of expected minimum simple regrets. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:6463-6497. Available from https://proceedings.mlr.press/v206/ishibashi23a.html.
