Adaptive Discretization for Adversarial Lipschitz Bandits
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:3788-3805, 2021.
Abstract
Lipschitz bandits is a prominent version of multi-armed bandits that studies large, structured action spaces such as the [0,1] interval, where similar actions are guaranteed to have similar rewards. A central theme here is the adaptive discretization of the action space, which gradually "zooms in" on the more promising regions thereof. The goal is to take advantage of "nicer" problem instances, while retaining near-optimal worst-case performance. While the stochastic version of the problem is well-understood, the general version with adversarial rewards is not. We provide the first algorithm for adaptive discretization in the adversarial version, and derive instance-dependent regret bounds. In particular, we recover the worst-case optimal regret bound for the adversarial version, and the instance-dependent regret bound for the stochastic version. A version with full proofs (and additional results) appears at arxiv.org/abs/2006.12367v2.
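For readers new to the setting, the Lipschitz condition and the benchmark regret rates referenced above can be stated as follows. These are the standard formulations from the Lipschitz-bandits literature, not quoted from the paper itself:

```latex
% Lipschitz condition: rewards of nearby arms are close in every round t,
% where f_t is the (possibly adversarial) reward function and \mathcal{D}
% is a known metric on the action space.
|f_t(x) - f_t(y)| \le \mathcal{D}(x, y)
  \quad \text{for all arms } x, y \text{ and all rounds } t.

% Benchmark rates over T rounds: d is the covering dimension of the action
% space (d = 1 for the [0,1] interval, giving the familiar T^{2/3} rate),
% and z <= d is the instance-dependent zooming dimension.
\text{worst case: } \tilde{\Theta}\!\left(T^{\frac{d+1}{d+2}}\right),
  \qquad
\text{adaptive discretization: } \tilde{O}\!\left(T^{\frac{z+1}{z+2}}\right).
```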
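The abstract does not spell out the adversarial algorithm. As background, here is a minimal sketch of classical adaptive discretization in the stochastic setting (the "zooming" algorithm of Kleinberg, Slivkins, and Upfal), which is the technique the paper extends to adversarial rewards; this sketch is illustrative only, and the activation rule, confidence radius, and all parameter choices below are simplifications, not the paper's method:

```python
# Sketch of the classical zooming algorithm for STOCHASTIC Lipschitz
# bandits on [0,1]. It illustrates adaptive discretization: arms are
# activated on demand, so pulls concentrate near high-reward regions.
import math
import random

def zooming(reward_fn, horizon, noise=0.1):
    arms = []  # each arm: {"x": point in [0,1], "n": pulls, "total": reward sum}

    def radius(a):
        # Confidence radius: shrinks as the arm accumulates pulls.
        return math.sqrt(2 * math.log(horizon) / (1 + a["n"]))

    for _ in range(horizon):
        # Activation rule (approximated via random candidate points): if
        # some point is not covered by any active arm's confidence ball,
        # activate a new arm there. This is how the algorithm "zooms in".
        for y in (random.random() for _ in range(10)):
            if all(abs(y - a["x"]) > radius(a) for a in arms):
                arms.append({"x": y, "n": 0, "total": 0.0})
                break

        # Selection rule: play the arm with the best optimistic index
        # (empirical mean plus twice the confidence radius).
        def index(a):
            if a["n"] == 0:
                return float("inf")  # new arms are tried once immediately
            return a["total"] / a["n"] + 2 * radius(a)

        arm = max(arms, key=index)
        arm["n"] += 1
        arm["total"] += reward_fn(arm["x"]) + random.gauss(0, noise)
    return arms

if __name__ == "__main__":
    # 1-Lipschitz mean reward peaked at x = 0.7; the most-pulled arms
    # should cluster near the peak, i.e. the discretization adapts.
    arms = zooming(lambda x: 1.0 - abs(x - 0.7), horizon=5000)
    for a in sorted(arms, key=lambda a: -a["n"])[:5]:
        print(f"arm at x={a['x']:.3f}: {a['n']} pulls")
```

The key design point, and the reason adaptive discretization can beat uniform discretization, is that confidence balls shrink only where pulls accumulate, so the partition of the action space becomes fine exactly in the promising regions; making this work when rewards are chosen adversarially is the paper's contribution.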