On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:1704-1722, 2017.
We study an equivalence of (i) deterministic pathwise statements appearing in the online learning literature (termed \emph{regret bounds}), (ii) high-probability tail bounds for the supremum of a collection of martingales (of a specific form arising from uniform laws of large numbers), and (iii) in-expectation bounds for the supremum. By virtue of the equivalence, we prove exponential tail bounds for norms of Banach-space-valued martingales via deterministic regret bounds for the online mirror descent algorithm with an adaptive step size. We show that the phenomenon extends beyond the setting of online linear optimization and present the equivalence for the supervised online learning setting.
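To make the algorithmic ingredient concrete, the following is a minimal sketch of online mirror descent in the Euclidean case (where it reduces to projected online gradient descent over an L2 ball) with an AdaGrad-style adaptive step size scaled by the accumulated squared gradient norms. This is an illustrative instance of the general technique named in the abstract, not the paper's construction; the function name, the scalar step-size rule, and the unit-ball domain are choices made here for the example.

```python
import numpy as np

def online_mirror_descent(loss_grads, radius=1.0):
    """Projected online gradient descent over the L2 ball of the
    given radius (mirror descent with the Euclidean mirror map).

    Uses the adaptive step size eta_t = radius / sqrt(sum_{s<=t} ||g_s||^2),
    which adapts to the observed gradient magnitudes.

    loss_grads: sequence of loss-gradient vectors g_1, ..., g_T.
    Returns the list of points x_1, ..., x_T played by the algorithm
    (each x_t is chosen before g_t is revealed).
    """
    d = len(loss_grads[0])
    x = np.zeros(d)          # start at the center of the ball
    grad_sq_sum = 0.0
    plays = []
    for g in loss_grads:
        plays.append(x.copy())
        grad_sq_sum += float(np.dot(g, g))
        eta = radius / np.sqrt(grad_sq_sum)   # adaptive step size
        x = x - eta * g                       # gradient step
        norm = np.linalg.norm(x)
        if norm > radius:                     # project back onto the ball
            x = x * (radius / norm)
    return plays

# Regret against the best fixed point u in the ball for linear losses
# <g_t, .> :  sum_t <g_t, x_t>  -  min_{||u|| <= R} sum_t <g_t, u>,
# where the minimum equals -R * ||sum_t g_t||.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(3) for _ in range(50)]
plays = online_mirror_descent(grads, radius=1.0)
cum_loss = sum(float(np.dot(g, x)) for g, x in zip(grads, plays))
regret = cum_loss + float(np.linalg.norm(np.sum(grads, axis=0)))
```

With this step-size rule the regret scales with $\sqrt{\sum_t \|g_t\|^2}$ (up to a constant depending on the radius) rather than with a fixed horizon, and it is deterministic pathwise guarantees of exactly this kind that the equivalence converts into martingale tail bounds.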