Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization

Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8715-8737, 2022.

Abstract

The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty due to its simplicity and efficiency. Despite its popularity, the theoretical aspects of this algorithm have not been properly analyzed. In particular, whether the EI strategy with a standard incumbent converges in the noisy setting is still an open question in the Gaussian process bandit optimization problem. We aim to answer this question by proposing a variant of EI with a standard incumbent defined via the GP predictive mean. We prove that our algorithm converges and achieves a cumulative regret bound of $\mathcal O(\gamma_T\sqrt{T})$, where $\gamma_T$ is the maximum information gain between $T$ observations and the Gaussian process model. Building on this variant of EI, we further propose an algorithm called Improved GP-EI that converges faster than previous counterparts. In particular, our proposed variants of EI do not require knowledge of the RKHS norm or of the noise’s sub-Gaussianity parameter, unlike previous works. Empirical validation in our paper demonstrates the effectiveness of our algorithms compared to several baselines.
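
To make the incumbent choice concrete, the sketch below implements vanilla EI for maximization with the incumbent taken as the maximum of the GP predictive mean over the points queried so far, which matches the standard-incumbent variant the abstract describes. This is a minimal illustration under our own assumptions, not code from the paper: the RBF kernel, its hyperparameters, and the helper names (rbf_kernel, gp_posterior, expected_improvement) are illustrative, and the Improved GP-EI refinement is not reproduced here.

    import numpy as np
    from scipy.stats import norm

    def rbf_kernel(A, B, lengthscale=0.2):
        # Squared-exponential kernel between row-stacked inputs A (n,d) and B (m,d).
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * sq / lengthscale ** 2)

    def gp_posterior(X, y, Xq, noise_var=0.01, lengthscale=0.2):
        # GP predictive mean and standard deviation at query points Xq.
        K = rbf_kernel(X, X, lengthscale) + noise_var * np.eye(len(X))
        Kq = rbf_kernel(Xq, X, lengthscale)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        mu = Kq @ alpha
        V = np.linalg.solve(L, Kq.T)
        var = np.clip(1.0 - (V ** 2).sum(axis=0), 1e-12, None)  # k(x,x) = 1 for RBF
        return mu, np.sqrt(var)

    def expected_improvement(mu, sigma, xi):
        # Closed-form EI for maximization: E[max(0, f(x) - xi)] under the GP posterior,
        # i.e. (mu - xi) * Phi(z) + sigma * phi(z) with z = (mu - xi) / sigma.
        z = (mu - xi) / sigma
        return (mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(10, 1))                     # points queried so far
    y = np.sin(6.0 * X[:, 0]) + 0.1 * rng.standard_normal(10)   # noisy observations
    Xq = np.linspace(0.0, 1.0, 200)[:, None]                    # candidate grid

    # Incumbent via the GP predictive mean at the observed points (not the
    # noisy best observation), as in the variant the abstract proposes.
    mu_obs, _ = gp_posterior(X, y, X)
    xi = mu_obs.max()

    mu_q, sd_q = gp_posterior(X, y, Xq)
    x_next = Xq[np.argmax(expected_improvement(mu_q, sd_q, xi))]

Defining the incumbent through the predictive mean rather than the best noisy observation is the point of the variant: it sidesteps the corruption of the incumbent by observation noise, and the abstract's $\mathcal O(\gamma_T\sqrt{T})$ bound concerns the cumulative regret accrued by repeatedly selecting points this way.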

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-tran-the22a,
  title     = {Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization},
  author    = {Tran-The, Hung and Gupta, Sunil and Rana, Santu and Venkatesh, Svetha},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {8715--8737},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/tran-the22a/tran-the22a.pdf},
  url       = {https://proceedings.mlr.press/v151/tran-the22a.html}
}
Endnote
%0 Conference Paper
%T Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization
%A Hung Tran-The
%A Sunil Gupta
%A Santu Rana
%A Svetha Venkatesh
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-tran-the22a
%I PMLR
%P 8715--8737
%U https://proceedings.mlr.press/v151/tran-the22a.html
%V 151
APA
Tran-The, H., Gupta, S., Rana, S. & Venkatesh, S. (2022). Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:8715-8737. Available from https://proceedings.mlr.press/v151/tran-the22a.html.