Bayesian Optimistic Optimisation with Exponentially Decaying Regret

Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:10390-10400, 2021.

Abstract

Bayesian optimisation (BO) is a well-known algorithm for finding the global optimum of expensive, black-box functions. Current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations. This paper explores the possibility of improving the regret bound in the noise-free setting by intertwining concepts from BO and optimistic optimisation methods, which are based on partitioning the search space. We propose the BOO algorithm, the first practical approach that can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$ under the assumption that the objective function is sampled from a Gaussian process with a Matérn kernel with smoothness parameter $\nu > 4 + \frac{D}{2}$, where $D$ is the number of dimensions. We perform experiments on the optimisation of various synthetic functions and machine learning hyperparameter tuning tasks and show that our algorithm outperforms baselines.
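Note that $N^{-\sqrt{N}} = e^{-\sqrt{N}\log N}$, so the claimed bound decays strictly faster than the previous best $\mathcal{O}(e^{-\sqrt{N}})$ rate, by an extra $\log N$ factor in the exponent. To make the high-level recipe concrete, the sketch below illustrates the general idea the abstract describes: maintain a hierarchical partition of the search space, score each leaf cell optimistically with a GP upper confidence bound at its centre, and repeatedly expand the most optimistic cell. This is a minimal toy, not the authors' BOO procedure; the function name boo_sketch, the widest-axis splitting rule, the UCB score, and the inflation parameter beta are all assumptions made for illustration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def boo_sketch(f, lo, hi, n_evals=30, beta=2.0, nu=2.5):
    """Maximise f over the box [lo, hi] by repeatedly splitting the most
    optimistic cell of a space partition, where optimism is measured by a
    GP upper confidence bound at the cell centre (illustrative only)."""
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    # GP surrogate with a Matern kernel, as in the paper's setting; the
    # paper's guarantee needs nu > 4 + D/2, here nu is just a default.
    # Small alpha acts as jitter for stability in the noise-free case.
    gp = GaussianProcessRegressor(kernel=Matern(nu=nu), alpha=1e-6)

    centre = (lo + hi) / 2.0
    X, y = [centre], [f(centre)]          # evaluated points and values
    leaves = [(lo, hi)]                   # current cells of the partition

    while len(X) < n_evals:
        gp.fit(np.array(X), np.array(y))
        # Score every leaf cell optimistically at its centre: mu + beta * sigma.
        centres = np.array([(l + h) / 2.0 for l, h in leaves])
        mu, sd = gp.predict(centres, return_std=True)
        l, h = leaves.pop(int(np.argmax(mu + beta * sd)))

        # Expand the chosen cell: halve it along its widest axis and
        # evaluate f at the centre of each child.
        axis = int(np.argmax(h - l))
        mid = (l[axis] + h[axis]) / 2.0
        left_hi, right_lo = h.copy(), l.copy()
        left_hi[axis], right_lo[axis] = mid, mid
        for cl, ch in ((l, left_hi), (right_lo, h)):
            leaves.append((cl, ch))
            c = (cl + ch) / 2.0
            X.append(c)
            y.append(f(c))

    i = int(np.argmax(y))
    return X[i], y[i]


# Example: maximise a smooth toy function on [0, 1]^2.
x_best, y_best = boo_sketch(lambda x: -np.sum((x - 0.3) ** 2),
                            lo=[0.0, 0.0], hi=[1.0, 1.0])

Splitting along the widest axis keeps cells well-shaped as the tree deepens; the GP upper confidence bound plays the role that Lipschitz-style cell bounds play in deterministic optimistic optimisation methods such as DOO/SOO.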

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-tran-the21a,
  title     = {Bayesian Optimistic Optimisation with Exponentially Decaying Regret},
  author    = {Tran-The, Hung and Gupta, Sunil and Rana, Santu and Venkatesh, Svetha},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {10390--10400},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/tran-the21a/tran-the21a.pdf},
  url       = {https://proceedings.mlr.press/v139/tran-the21a.html},
  abstract  = {Bayesian optimisation (BO) is a well-known algorithm for finding the global optimum of expensive, black-box functions. Current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations. This paper explores the possibility of improving the regret bound in the noise-free setting by intertwining concepts from BO and optimistic optimisation methods, which are based on partitioning the search space. We propose the BOO algorithm, the first practical approach that can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$ under the assumption that the objective function is sampled from a Gaussian process with a Matérn kernel with smoothness parameter $\nu > 4 + \frac{D}{2}$, where $D$ is the number of dimensions. We perform experiments on the optimisation of various synthetic functions and machine learning hyperparameter tuning tasks and show that our algorithm outperforms baselines.}
}
Endnote
%0 Conference Paper
%T Bayesian Optimistic Optimisation with Exponentially Decaying Regret
%A Hung Tran-The
%A Sunil Gupta
%A Santu Rana
%A Svetha Venkatesh
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-tran-the21a
%I PMLR
%P 10390--10400
%U https://proceedings.mlr.press/v139/tran-the21a.html
%V 139
%X Bayesian optimisation (BO) is a well-known algorithm for finding the global optimum of expensive, black-box functions. Current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations. This paper explores the possibility of improving the regret bound in the noise-free setting by intertwining concepts from BO and optimistic optimisation methods, which are based on partitioning the search space. We propose the BOO algorithm, the first practical approach that can achieve an exponential regret bound of order $\mathcal{O}(N^{-\sqrt{N}})$ under the assumption that the objective function is sampled from a Gaussian process with a Matérn kernel with smoothness parameter $\nu > 4 + \frac{D}{2}$, where $D$ is the number of dimensions. We perform experiments on the optimisation of various synthetic functions and machine learning hyperparameter tuning tasks and show that our algorithm outperforms baselines.
APA
Tran-The, H., Gupta, S., Rana, S., & Venkatesh, S. (2021). Bayesian Optimistic Optimisation with Exponentially Decaying Regret. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:10390-10400. Available from https://proceedings.mlr.press/v139/tran-the21a.html.