Tight Regret Bounds for Bayesian Optimization in One Dimension
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4500-4508, 2018.
Abstract
We consider the problem of Bayesian optimization (BO) in one dimension, under a Gaussian process prior and Gaussian sampling noise. We provide a theoretical analysis showing that, under fairly mild technical assumptions on the kernel, the best possible cumulative regret up to time $T$ behaves as $\Omega(\sqrt{T})$ and $O(\sqrt{T\log T})$. This gives a tight characterization up to a $\sqrt{\log T}$ factor, and includes the first nontrivial lower bound for noisy BO. Our assumptions are satisfied, for example, by the squared exponential and Matérn-$\nu$ kernels, with the latter requiring $\nu > 2$. Our results certify the near-optimality of existing bounds (Srinivas et al., 2009) for the SE kernel, while proving them to be strictly suboptimal for the Matérn kernel with $\nu > 2$.
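To make the setting concrete, below is a minimal, self-contained sketch of the problem the abstract describes: a one-dimensional objective drawn from a GP prior with a squared exponential kernel, noisy evaluations, and a GP-UCB-style acquisition rule in the spirit of Srinivas et al. (2009). This is not the algorithm analyzed in the paper; the grid size, length-scale, noise level, and $\beta_t$ schedule are illustrative assumptions, and the script simply reports the cumulative regret whose scaling the paper characterizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_kernel(x, y, ell=0.1):
    # Squared exponential kernel k(x, y) = exp(-(x - y)^2 / (2 ell^2)).
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * ell ** 2))

n_grid, sigma, T = 200, 0.1, 100          # illustrative choices
grid = np.linspace(0.0, 1.0, n_grid)      # discretized domain [0, 1]
K = se_kernel(grid, grid)

# Draw the unknown objective f from the GP prior (jitter for stability).
f = rng.multivariate_normal(np.zeros(n_grid), K + 1e-10 * np.eye(n_grid))
f_max = f.max()

X_idx, y = [], []
cum_regret = 0.0
for t in range(1, T + 1):
    if not X_idx:
        mu, var = np.zeros(n_grid), np.diag(K).copy()
    else:
        # Standard GP posterior under Gaussian noise of variance sigma^2.
        idx = np.array(X_idx)
        Kxx = K[np.ix_(idx, idx)] + sigma ** 2 * np.eye(len(idx))
        Ksx = K[:, idx]
        mu = Ksx @ np.linalg.solve(Kxx, np.array(y))
        var = np.diag(K) - np.einsum(
            'ij,ji->i', Ksx, np.linalg.solve(Kxx, Ksx.T))
    beta = 2.0 * np.log(n_grid * t ** 2)  # assumed confidence width
    ucb = mu + np.sqrt(beta * np.maximum(var, 0.0))
    i = int(np.argmax(ucb))               # query the UCB maximizer
    X_idx.append(i)
    y.append(f[i] + sigma * rng.standard_normal())
    cum_regret += f_max - f[i]            # instantaneous regret adds up

print(f"cumulative regret after T={T} rounds: {cum_regret:.3f}")
```

Re-running with larger $T$ and averaging over seeds shows the cumulative regret growing sublinearly, consistent with the $\Omega(\sqrt{T})$ and $O(\sqrt{T\log T})$ characterization stated above.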