One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:802-815, 2023.
Abstract
Decentralized learning has been studied intensively
in recent years, motivated by its wide applications
in the context of federated learning. The majority
of previous research focuses on the offline setting,
in which the objective function is static. However,
the offline setting becomes unrealistic in numerous
machine learning applications where massive amounts
of data change over time. In this paper, we propose
\emph{decentralized online} algorithms for convex and
continuous DR-submodular optimization, two classes
of functions that arise in a variety of
machine learning problems. Our algorithms achieve
performance guarantees comparable to those in the
centralized offline setting. Moreover, on average,
each participant performs only a \emph{single}
gradient computation per time step. Subsequently, we
extend our algorithms to the bandit
setting. Finally, we illustrate the competitive
performance of our algorithms in real-world
experiments.
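
To make the "single gradient computation per time step" claim concrete, here is a minimal, hypothetical Python sketch of one round of a decentralized online Frank-Wolfe method: each node gossip-averages its neighbors' iterates, computes exactly one local gradient, and takes a projection-free step toward a linear-oracle solution. The ring gossip matrix W, the l1-ball constraint, the quadratic per-node losses, and all names (lmo_l1_ball, etc.) are illustrative assumptions for exposition, not the paper's actual algorithm, which handles general convex and continuous DR-submodular objectives.

    import numpy as np

    rng = np.random.default_rng(0)

    def lmo_l1_ball(g, radius=1.0):
        # Linear minimization oracle over an l1 ball (assumed constraint
        # set): returns argmin_{||v||_1 <= radius} <g, v>.
        v = np.zeros_like(g)
        i = np.argmax(np.abs(g))
        v[i] = -radius * np.sign(g[i])
        return v

    n, d, T = 4, 10, 200
    # Ring-topology gossip matrix W (doubly stochastic): each node
    # averages itself with its two ring neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

    x = np.zeros((n, d))            # local iterates, one row per node
    for t in range(1, T + 1):
        a = rng.normal(size=(n, d)) # data revealed online at time t
        y = W @ x                   # consensus: y_i = sum_j W[i,j] x_j
        step = 2.0 / (t + 2)        # classic Frank-Wolfe step size
        for i in range(n):
            g = y[i] - a[i]         # ONE gradient of the assumed local
                                    # loss f_{i,t}(x) = ||x - a_i||^2 / 2
            v = lmo_l1_ball(g)      # one projection-free oracle call
            x[i] = y[i] + step * (v - y[i])

Since the update moves each iterate toward a convex combination of points in the constraint set, every local iterate stays feasible without any projection, which is the usual appeal of Frank-Wolfe-style methods in this setting.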