Contextual bandits with continuous actions: Smoothing, zooming, and adapting

Akshay Krishnamurthy, John Langford, Aleksandrs Slivkins, Chicheng Zhang;
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2025-2027, 2019.

Abstract

We study contextual bandit learning for any competitor policy class and continuous action space. We obtain two qualitatively different regret bounds: one competes with a smoothed version of the policy class under no continuity assumptions, while the other requires standard Lipschitz assumptions. Both bounds exhibit data-dependent “zooming” behavior and, with no tuning, yield improved guarantees for benign problems. We also study adapting to unknown smoothness parameters, establishing a price-of-adaptivity and deriving optimal adaptive algorithms that require no additional information.
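
As a rough illustration of the smoothing idea mentioned in the abstract (a minimal sketch, not the paper's algorithm): assume a one-dimensional action space [0, 1], a uniform smoothing kernel of bandwidth h, and actions logged uniformly at random. Smoothing a policy replaces its chosen action with a uniform draw from a radius-h window around it, which gives every nearby action positive propensity and makes importance-weighted off-policy estimates well defined. The names smooth_action and smoothed_ips below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def smooth_action(base_action: float, h: float) -> float:
    """Play a uniformly random action in a radius-h window around the
    base policy's action, clipped to the action space [0, 1].  This is
    the uniform-kernel smoothing of a policy's action choice."""
    lo = max(0.0, base_action - h)
    hi = min(1.0, base_action + h)
    return rng.uniform(lo, hi)


def smoothed_ips(reward: float, played: float,
                 target_base: float, h: float) -> float:
    """One-round importance-weighted estimate of the *smoothed* reward
    of a target policy, assuming actions were logged uniformly on
    [0, 1] (logging density 1).  The smoothed target's density at the
    played action is 1 / (window width) inside its window, 0 outside."""
    lo = max(0.0, target_base - h)
    hi = min(1.0, target_base + h)
    if lo <= played <= hi:
        return reward / (hi - lo)
    return 0.0


if __name__ == "__main__":
    # Made-up reward curve peaked at 0.7; the Monte Carlo average
    # converges to the smoothed reward of the target policy.
    def true_reward(a: float) -> float:
        return 1.0 - abs(a - 0.7)

    h, target_base = 0.1, 0.6
    draws = rng.uniform(0.0, 1.0, size=200_000)  # uniform logging
    ests = [smoothed_ips(true_reward(a), a, target_base, h) for a in draws]
    print(np.mean(ests))  # ~0.9: average of true_reward over [0.5, 0.7]
```

The bandwidth h plays the role described in the abstract's smoothed guarantee: larger h lowers the variance of such estimates but competes with a more heavily smoothed benchmark, which is why adapting to an unknown smoothness level carries a price.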
