The Min-Max Complexity of Distributed Stochastic Convex Optimization with Intermittent Communication
Proceedings of the Thirty Fourth Conference on Learning Theory, PMLR 134:4386-4437, 2021.
Abstract
We resolve the min-max complexity of distributed stochastic convex optimization (up to a logarithmic factor) in the intermittent communication setting, where $M$ machines work in parallel over $R$ rounds of communication to optimize the objective, and during each round each machine may sequentially compute $K$ stochastic gradient estimates. We present a novel lower bound with a matching upper bound that establishes an optimal algorithm.
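To make the intermittent communication setting concrete, the following is a minimal sketch of its computation pattern, not the paper's optimal algorithm: $M$ machines each take $K$ sequential stochastic gradient steps per round, then communicate once by averaging their iterates, over $R$ rounds (a local-SGD-style scheme). The quadratic objective, noise model, and step size `lr` are illustrative placeholders, not from the paper.

```python
import numpy as np

def stochastic_grad(x, rng):
    # Placeholder oracle: gradient of F(x) = 0.5 * ||x||^2, plus Gaussian noise.
    return x + rng.normal(scale=0.1, size=x.shape)

def intermittent_communication(M=4, R=10, K=5, dim=3, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)                      # shared iterate at the start of each round
    for _ in range(R):                     # R rounds of communication
        local_iterates = []
        for _ in range(M):                 # M machines working in parallel
            x_m = x.copy()
            for _ in range(K):             # K sequential stochastic gradient steps
                x_m -= lr * stochastic_grad(x_m, rng)
            local_iterates.append(x_m)
        x = np.mean(local_iterates, axis=0)  # one communication: average the iterates
    return x

print(intermittent_communication())
```

Each round thus uses $MK$ stochastic gradient estimates in total but only one synchronization, which is the tension the min-max analysis characterizes.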