Lower Bounds on Regret for Noisy Gaussian Process Bandit Optimization
Jonathan Scarlett, Ilija Bogunovic, Volkan Cevher; Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1723-1742, 2017.
Abstract
In this paper, we consider the problem of sequentially optimizing a black-box function f based on noisy samples and bandit feedback. We assume that f is smooth in the sense of having a bounded norm in some reproducing kernel Hilbert space (RKHS), yielding a commonly-considered non-Bayesian form of Gaussian process bandit optimization. We provide algorithm-independent lower bounds on the simple regret, measuring the suboptimality of a single point reported after T rounds, and on the cumulative regret, measuring the sum of regrets over the T chosen points. For the isotropic squared-exponential kernel in d dimensions, we find that an average simple regret of ε requires $T = \Omega\big(\frac{1}{ε^2}\big(\log\frac{1}{ε}\big)^{d/2}\big)$, and the average cumulative regret is at least $\Omega\big(\sqrt{T(\log T)^{d/2}}\big)$, thus matching existing upper bounds up to the replacement of d/2 by 2d+O(1) in both cases. For the Matérn-ν kernel, we give analogous bounds of the form $\Omega\big(\big(\frac{1}{ε}\big)^{2+d/ν}\big)$ and $\Omega\big(T^{\frac{ν+d}{2ν+d}}\big)$, and discuss the resulting gaps to the existing upper bounds.
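For concreteness, here is a minimal sketch of the two regret notions referenced above, using notation that is ours rather than the abstract's ($D$ for the domain, $x_t$ for the point queried at round $t$, and $x^{(T)}$ for the single point reported after $T$ rounds):

\[
r(x) = \max_{x' \in D} f(x') - f(x), \qquad R_T = \sum_{t=1}^{T} r(x_t), \qquad r\big(x^{(T)}\big) \;\; \text{(simple regret)}.
\]

As a worked instance of the Matérn-ν cumulative bound, taking $d = 1$ and $ν = \tfrac{3}{2}$ gives $\Omega\big(T^{\frac{ν+d}{2ν+d}}\big) = \Omega\big(T^{5/8}\big)$.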