Self-Adjusting Weighted Expected Improvement for Bayesian Optimization

Carolin Benjamins, Elena Raponi, Anja Jankovic, Carola Doerr, Marius Lindauer
Proceedings of the Second International Conference on Automated Machine Learning, PMLR 224:6/1-50, 2023.

Abstract

Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
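The acquisition function the abstract builds on, weighted Expected Improvement, splits classic EI into an exploitation term and an exploration term combined by a weight. A minimal sketch of that decomposition is below; the function name, signature, and the interpretation of `w` as the knob that SAWEI would self-adjust are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def weighted_ei(mu, sigma, f_best, w):
    """Hypothetical sketch of weighted Expected Improvement (minimization).

    mu, sigma : posterior mean and std. of the surrogate at a candidate point
    f_best    : best (lowest) observed objective value so far (the incumbent)
    w         : exploration-exploitation weight in [0, 1]; w = 0.5 recovers
                classic EI up to a factor of 2
    """
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive std
    z = (f_best - mu) / sigma
    exploit = (f_best - mu) * norm.cdf(z)     # expected improvement over incumbent
    explore = sigma * norm.pdf(z)             # reward for predictive uncertainty
    return w * exploit + (1.0 - w) * explore
```

A self-adjusting scheme in the spirit of the abstract would then update `w` online, e.g. shifting toward exploration when a convergence signal indicates BO is stagnating.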

Cite this Paper


BibTeX
@InProceedings{pmlr-v224-benjamins23a,
  title     = {Self-Adjusting Weighted Expected Improvement for Bayesian Optimization},
  author    = {Benjamins, Carolin and Raponi, Elena and Jankovic, Anja and Doerr, Carola and Lindauer, Marius},
  booktitle = {Proceedings of the Second International Conference on Automated Machine Learning},
  pages     = {6/1--50},
  year      = {2023},
  editor    = {Faust, Aleksandra and Garnett, Roman and White, Colin and Hutter, Frank and Gardner, Jacob R.},
  volume    = {224},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v224/benjamins23a/benjamins23a.pdf},
  url       = {https://proceedings.mlr.press/v224/benjamins23a.html},
  abstract  = {Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.}
}
Endnote
%0 Conference Paper
%T Self-Adjusting Weighted Expected Improvement for Bayesian Optimization
%A Carolin Benjamins
%A Elena Raponi
%A Anja Jankovic
%A Carola Doerr
%A Marius Lindauer
%B Proceedings of the Second International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Aleksandra Faust
%E Roman Garnett
%E Colin White
%E Frank Hutter
%E Jacob R. Gardner
%F pmlr-v224-benjamins23a
%I PMLR
%P 6/1--50
%U https://proceedings.mlr.press/v224/benjamins23a.html
%V 224
%X Bayesian Optimization (BO) is a class of surrogate-based, sample-efficient algorithms for optimizing black-box problems with small evaluation budgets. The BO pipeline itself is highly configurable with many different design choices regarding the initial design, surrogate model, and acquisition function (AF). Unfortunately, our understanding of how to select suitable components for a problem at hand is very limited. In this work, we focus on the definition of the AF, whose main purpose is to balance the trade-off between exploring regions with high uncertainty and those with high promise for good solutions. We propose Self-Adjusting Weighted Expected Improvement (SAWEI), where we let the exploration-exploitation trade-off self-adjust in a data-driven manner, based on a convergence criterion for BO. On the noise-free black-box BBOB functions of the COCO benchmarking platform, our method exhibits a favorable any-time performance compared to handcrafted baselines and serves as a robust default choice for any problem structure. The suitability of our method also transfers to HPOBench. With SAWEI, we are a step closer to on-the-fly, data-driven, and robust BO designs that automatically adjust their sampling behavior to the problem at hand.
APA
Benjamins, C., Raponi, E., Jankovic, A., Doerr, C. & Lindauer, M. (2023). Self-Adjusting Weighted Expected Improvement for Bayesian Optimization. Proceedings of the Second International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 224:6/1-50. Available from https://proceedings.mlr.press/v224/benjamins23a.html.