One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization

Tuan-Anh Nguyen, Nguyen Kim Thang, Denis Trystram
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:802-815, 2023.

Abstract

Decentralized learning has been studied intensively in recent years, motivated by its wide applications in the context of federated learning. The majority of previous research focuses on the offline setting, in which the objective function is static. However, the offline setting becomes unrealistic in numerous machine learning applications where massive amounts of data change over time. In this paper, we propose decentralized online algorithms for convex and continuous DR-submodular optimization, two classes of functions that arise in a variety of machine learning problems. Our algorithms achieve performance guarantees comparable to those in the centralized offline setting. Moreover, on average, each participant performs only a single gradient computation per time step. Subsequently, we extend our algorithms to the bandit setting. Finally, we illustrate the competitive performance of our algorithms in real-world experiments.
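The abstract's central idea is a decentralized online Frank-Wolfe scheme in which each agent performs a single gradient computation per round and coordinates with neighbors by gossip averaging. The paper's exact update rules are not reproduced here; the Python sketch below only illustrates the general shape of one such round, under purely illustrative assumptions: a probability-simplex constraint set, a doubly stochastic gossip matrix W, and a fixed step size eta. The helper names and update are hypothetical, not the authors' algorithm.

    import numpy as np

    def lmo_simplex(grad):
        # Linear minimization oracle over the probability simplex:
        # argmin_{s in simplex} <grad, s> is the vertex (one-hot
        # vector) at the coordinate with the smallest gradient entry.
        s = np.zeros_like(grad)
        s[np.argmin(grad)] = 1.0
        return s

    def decentralized_fw_round(x, grads, W, eta):
        # Illustrative round of decentralized online Frank-Wolfe.
        # x:     (n, d) array; row i is agent i's current iterate
        # grads: (n, d) array; row i is agent i's single local gradient
        # W:     (n, n) doubly stochastic gossip matrix of the network
        # eta:   Frank-Wolfe step size in (0, 1]
        y = W @ x          # gossip: average iterates with neighbors
        g = W @ grads      # gossip: average local gradient estimates
        x_next = np.empty_like(x)
        for i in range(x.shape[0]):
            s = lmo_simplex(g[i])                  # one LMO call per agent
            x_next[i] = (1.0 - eta) * y[i] + eta * s
        return x_next

Schemes of this family are typically run for T rounds with a step size on the order of 1/sqrt(T); consult the paper itself for the actual updates, step sizes, and regret guarantees in the convex, DR-submodular, and bandit settings.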

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-nguyen23a,
  title     = {One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization},
  author    = {Nguyen, Tuan-Anh and Kim Thang, Nguyen and Trystram, Denis},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {802--815},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/nguyen23a/nguyen23a.pdf},
  url       = {https://proceedings.mlr.press/v189/nguyen23a.html},
  abstract  = {Decentralized learning has been studied intensively in recent years, motivated by its wide applications in the context of federated learning. The majority of previous research focuses on the offline setting, in which the objective function is static. However, the offline setting becomes unrealistic in numerous machine learning applications where massive amounts of data change over time. In this paper, we propose \emph{decentralized online} algorithms for convex and continuous DR-submodular optimization, two classes of functions that arise in a variety of machine learning problems. Our algorithms achieve performance guarantees comparable to those in the centralized offline setting. Moreover, on average, each participant performs only a \emph{single} gradient computation per time step. Subsequently, we extend our algorithms to the bandit setting. Finally, we illustrate the competitive performance of our algorithms in real-world experiments.}
}
Endnote
%0 Conference Paper
%T One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization
%A Tuan-Anh Nguyen
%A Nguyen Kim Thang
%A Denis Trystram
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-nguyen23a
%I PMLR
%P 802--815
%U https://proceedings.mlr.press/v189/nguyen23a.html
%V 189
%X Decentralized learning has been studied intensively in recent years, motivated by its wide applications in the context of federated learning. The majority of previous research focuses on the offline setting, in which the objective function is static. However, the offline setting becomes unrealistic in numerous machine learning applications where massive amounts of data change over time. In this paper, we propose decentralized online algorithms for convex and continuous DR-submodular optimization, two classes of functions that arise in a variety of machine learning problems. Our algorithms achieve performance guarantees comparable to those in the centralized offline setting. Moreover, on average, each participant performs only a single gradient computation per time step. Subsequently, we extend our algorithms to the bandit setting. Finally, we illustrate the competitive performance of our algorithms in real-world experiments.
APA
Nguyen, T., Kim Thang, N., & Trystram, D. (2023). One Gradient Frank-Wolfe for Decentralized Online Convex and Submodular Optimization. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:802-815. Available from https://proceedings.mlr.press/v189/nguyen23a.html.