Bayesian Optimization with Inexact Acquisition: Is Random Grid Search Sufficient?

Hwanwoo Kim, Chong Liu, Yuxin Chen
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2202-2222, 2025.

Abstract

Bayesian optimization (BO) is a widely used iterative algorithm for optimizing black-box functions. Each iteration requires maximizing an acquisition function, such as the upper confidence bound (UCB) or a sample path from the Gaussian process (GP) posterior, as in Thompson sampling (TS). However, finding an exact solution to these maximization problems is often intractable and computationally expensive. Reflecting such realistic situations, in this paper we study the effect of inexact maximizers of the acquisition functions. Defining a measure of inaccuracy in acquisition solutions, we establish cumulative regret bounds for both GP-UCB and GP-TS without requiring exact solutions to the acquisition function maximization. Our results show that, under appropriate conditions on the accumulated inaccuracy, inexact BO algorithms can still achieve sublinear cumulative regret. Motivated by these findings, we provide both theoretical justification and numerical validation for random grid search as an effective and computationally efficient acquisition function solver.
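
To make the setting concrete, below is a minimal sketch of GP-UCB in which the acquisition function is maximized only inexactly, by evaluating it on a freshly sampled random grid each round (the solver examined in the paper). The RBF kernel, unit-cube domain, fixed confidence parameter beta, grid size, and jitter level are illustrative assumptions for this sketch, not the paper's exact settings.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel between rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xq, noise=1e-3):
    # GP posterior mean and standard deviation at query points Xq,
    # given observations (X, y) and a small jitter for stability.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, 0), 1e-12, None)
    return mean, np.sqrt(var)

def bo_ucb_random_grid(f, dim, T=30, grid_size=200, beta=2.0, seed=0):
    # GP-UCB where each round's acquisition maximization is replaced by
    # random grid search: the UCB is evaluated only on a fresh uniform grid,
    # so the chosen point is an inexact maximizer of the acquisition function.
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(1, dim))            # one random initial design point
    y = np.array([f(x) for x in X])
    for _ in range(T):
        grid = rng.uniform(size=(grid_size, dim))
        mean, std = gp_posterior(X, y, grid)
        ucb = mean + np.sqrt(beta) * std
        x_next = grid[np.argmax(ucb)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmax(y)], y.max()

if __name__ == "__main__":
    # Toy objective on [0, 1]^2 with maximum at x = (0.3, 0.3).
    best_x, best_y = bo_ucb_random_grid(lambda x: -np.sum((x - 0.3)**2), dim=2)
    print(best_x, best_y)
```

Here the argmax over the random grid is only an approximate maximizer of the UCB acquisition; the paper's regret bounds quantify how much such accumulated inaccuracy can be tolerated while still achieving sublinear cumulative regret.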

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-kim25b,
  title     = {Bayesian Optimization with Inexact Acquisition: Is Random Grid Search Sufficient?},
  author    = {Kim, Hwanwoo and Liu, Chong and Chen, Yuxin},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2202--2222},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/kim25b/kim25b.pdf},
  url       = {https://proceedings.mlr.press/v286/kim25b.html},
  abstract  = {Bayesian optimization (BO) is a widely used iterative algorithm for optimizing black-box functions. Each iteration requires maximizing an acquisition function, such as the upper confidence bound (UCB) or a sample path from the Gaussian process (GP) posterior, as in Thompson sampling (TS). However, finding an exact solution to these maximization problems is often intractable and computationally expensive. Reflecting such realistic situations, in this paper we study the effect of inexact maximizers of the acquisition functions. Defining a measure of inaccuracy in acquisition solutions, we establish cumulative regret bounds for both GP-UCB and GP-TS without requiring exact solutions to the acquisition function maximization. Our results show that, under appropriate conditions on the accumulated inaccuracy, inexact BO algorithms can still achieve sublinear cumulative regret. Motivated by these findings, we provide both theoretical justification and numerical validation for random grid search as an effective and computationally efficient acquisition function solver.}
}
Endnote
%0 Conference Paper
%T Bayesian Optimization with Inexact Acquisition: Is Random Grid Search Sufficient?
%A Hwanwoo Kim
%A Chong Liu
%A Yuxin Chen
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-kim25b
%I PMLR
%P 2202--2222
%U https://proceedings.mlr.press/v286/kim25b.html
%V 286
%X Bayesian optimization (BO) is a widely used iterative algorithm for optimizing black-box functions. Each iteration requires maximizing an acquisition function, such as the upper confidence bound (UCB) or a sample path from the Gaussian process (GP) posterior, as in Thompson sampling (TS). However, finding an exact solution to these maximization problems is often intractable and computationally expensive. Reflecting such realistic situations, in this paper we study the effect of inexact maximizers of the acquisition functions. Defining a measure of inaccuracy in acquisition solutions, we establish cumulative regret bounds for both GP-UCB and GP-TS without requiring exact solutions to the acquisition function maximization. Our results show that, under appropriate conditions on the accumulated inaccuracy, inexact BO algorithms can still achieve sublinear cumulative regret. Motivated by these findings, we provide both theoretical justification and numerical validation for random grid search as an effective and computationally efficient acquisition function solver.
APA
Kim, H., Liu, C. & Chen, Y. (2025). Bayesian Optimization with Inexact Acquisition: Is Random Grid Search Sufficient? Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2202-2222. Available from https://proceedings.mlr.press/v286/kim25b.html.