Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency

Sudeep Salgia, Sattar Vakili, Qing Zhao
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:43112-43141, 2024.

Abstract

We consider Bayesian optimization using Gaussian Process models, also referred to as kernel-based bandit optimization. We study the methodology of exploring the domain using random samples drawn from a distribution. We show that this random exploration approach achieves the optimal error rates. Our analysis is based on novel concentration bounds in an infinite dimensional Hilbert space established in this work, which may be of independent interest. We further develop an algorithm based on random exploration with domain shrinking and establish its order-optimal regret guarantees under both noise-free and noisy settings. In the noise-free setting, our analysis closes the existing gap in regret performance under a mild assumption on the underlying function and thereby partially resolves a COLT open problem. The proposed algorithm also enjoys a computational advantage over prevailing methods due to the random exploration that obviates the expensive optimization of a non-convex acquisition function for choosing the query points at each iteration.
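To make the methodology in the abstract concrete, here is a minimal Python sketch of Bayesian optimization with random exploration and domain shrinking. It is not the paper's exact algorithm: the Matern kernel, the epoch schedule, and the confidence width beta are illustrative assumptions. The key features it reflects are that query points in each epoch are drawn at random from the active domain, with no acquisition-function maximization, and that the domain is then shrunk using Gaussian Process confidence bounds.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def random_exploration_bo(f, candidates, epochs=5, queries_per_epoch=20,
                          beta=2.0, noise=1e-3, rng=None):
    """Maximize f over a finite candidate set via random exploration and domain shrinking.

    Illustrative sketch only; hyperparameters and shrinking rule are assumptions,
    not the procedure analyzed in the paper.
    """
    rng = np.random.default_rng(rng)
    active = np.asarray(candidates)   # current (shrunk) domain, shape (n, d)
    X, y = [], []
    for _ in range(epochs):
        # Random exploration: draw query points uniformly at random from the
        # active domain instead of optimizing a non-convex acquisition function.
        idx = rng.integers(len(active), size=queries_per_epoch)
        for x in active[idx]:
            X.append(x)
            y.append(f(x))
        # Fit a GP surrogate to all observations collected so far.
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=noise)
        gp.fit(np.array(X), np.array(y))
        mu, sigma = gp.predict(active, return_std=True)
        # Domain shrinking: keep only candidates whose upper confidence bound
        # reaches the best lower confidence bound, i.e. points that could
        # still be (near-)optimal.
        ucb, lcb = mu + beta * sigma, mu - beta * sigma
        active = active[ucb >= lcb.max()]
    best = int(np.argmax(y))
    return X[best], y[best]

For example, calling random_exploration_bo(f, np.random.rand(500, 2)) for a black-box f on the unit square returns the best queried point. Note that each epoch fits the GP once and never maximizes an acquisition function over the domain, which is the source of the computational advantage highlighted in the abstract.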

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-salgia24a,
  title     = {Random Exploration in {B}ayesian Optimization: Order-Optimal Regret and Computational Efficiency},
  author    = {Salgia, Sudeep and Vakili, Sattar and Zhao, Qing},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {43112--43141},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/salgia24a/salgia24a.pdf},
  url       = {https://proceedings.mlr.press/v235/salgia24a.html}
}
Endnote
%0 Conference Paper
%T Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency
%A Sudeep Salgia
%A Sattar Vakili
%A Qing Zhao
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-salgia24a
%I PMLR
%P 43112--43141
%U https://proceedings.mlr.press/v235/salgia24a.html
%V 235
APA
Salgia, S., Vakili, S. & Zhao, Q. (2024). Random Exploration in Bayesian Optimization: Order-Optimal Regret and Computational Efficiency. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:43112-43141. Available from https://proceedings.mlr.press/v235/salgia24a.html.