Nearly Optimal Regret for Decentralized Online Convex Optimization
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4862-4888, 2024.
Abstract
We investigate decentralized online convex optimization (D-OCO), in which a set of local learners are required to minimize a sequence of global loss functions using only local computations and communications. Previous studies have established regret bounds of $O(n^{5/4}\rho^{-1/2}\sqrt{T})$ and $O(n^{3/2}\rho^{-1}\log T)$ for convex and strongly convex functions, respectively, where $n$ is the number of local learners, $\rho<1$ is the spectral gap of the communication matrix, and $T$ is the time horizon. However, these bounds remain far from the existing lower bounds, namely $\Omega(n\sqrt{T})$ for convex functions and $\Omega(n)$ for strongly convex functions. To fill these gaps, in this paper, we first develop novel D-OCO algorithms that respectively reduce the regret bounds for convex and strongly convex functions to $\tilde{O}(n\rho^{-1/4}\sqrt{T})$ and $\tilde{O}(n\rho^{-1/2}\log T)$. The primary technique is to design an online accelerated gossip strategy that enjoys faster average consensus among the local learners. Furthermore, by carefully exploiting the spectral properties of a specific network topology, we enhance the lower bounds for convex and strongly convex functions to $\Omega(n\rho^{-1/4}\sqrt{T})$ and $\Omega(n\rho^{-1/2})$, respectively. These lower bounds suggest that our algorithms are nearly optimal in terms of $T$, $n$, and $\rho$.
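The abstract's core technique is accelerated gossip for faster average consensus. As a hedged illustration (not the paper's algorithm), the sketch below compares plain gossip iterations $x_{k+1} = P x_k$ against a standard Chebyshev-style (heavy-ball) accelerated variant $x_{k+1} = (1+\beta) P x_k - \beta x_{k-1}$ on a ring network; the gossip matrix, network size, and momentum parameter are all illustrative assumptions.

```python
import numpy as np

def ring_gossip_matrix(n):
    # Symmetric doubly stochastic gossip matrix for a ring topology:
    # each node keeps weight 1/2 and gives 1/4 to each neighbor.
    P = np.zeros((n, n))
    for i in range(n):
        P[i, i] = 0.5
        P[i, (i - 1) % n] = 0.25
        P[i, (i + 1) % n] = 0.25
    return P

def plain_gossip(P, x0, steps):
    # Standard gossip: repeated averaging with neighbors.
    x = x0.copy()
    for _ in range(steps):
        x = P @ x
    return x

def accelerated_gossip(P, x0, steps, sigma):
    # Heavy-ball (Chebyshev-style) accelerated gossip:
    #   x_{k+1} = (1 + beta) P x_k - beta x_{k-1},
    # with beta set from sigma, the second-largest eigenvalue
    # magnitude of P. The average of the iterates is preserved.
    beta = (1 - np.sqrt(1 - sigma**2)) / (1 + np.sqrt(1 - sigma**2))
    x_prev, x = x0.copy(), P @ x0
    for _ in range(steps - 1):
        x, x_prev = (1 + beta) * (P @ x) - beta * x_prev, x
    return x

# Illustrative setup: 20 nodes on a ring, 30 communication rounds.
n, steps = 20, 30
rng = np.random.default_rng(0)
x0 = rng.standard_normal(n)
P = ring_gossip_matrix(n)
# Second-largest eigenvalue magnitude of the symmetric matrix P.
sigma = np.sort(np.abs(np.linalg.eigvalsh(P)))[-2]
avg = x0.mean()
err_plain = np.max(np.abs(plain_gossip(P, x0, steps) - avg))
err_acc = np.max(np.abs(accelerated_gossip(P, x0, steps, sigma) - avg))
print(err_plain, err_acc)  # accelerated consensus contracts faster
```

Per eigencomponent, plain gossip contracts at rate $\sigma$ per round, while the accelerated recursion contracts at roughly $\sqrt{\beta} \approx (1-\sqrt{1-\sigma^2})/\sigma$, which is much smaller when $\sigma$ is close to 1 (i.e., when the spectral gap is small), mirroring the $\rho^{-1/2} \to \rho^{-1/4}$ improvement in the stated regret bounds.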