Projection-free Distributed Online Learning in Networks

Wenpeng Zhang, Peilin Zhao, Wenwu Zhu, Steven C. H. Hoi, Tong Zhang
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:4054-4062, 2017.

Abstract

The conditional gradient algorithm has regained a surge of research interest in recent years due to its high efficiency in handling large-scale machine learning problems. However, none of the existing studies has explored it in the distributed online learning setting, where local computation is assumed to be light. In this paper, we fill this gap by proposing the distributed online conditional gradient algorithm, which eschews the expensive projection operation needed in its counterpart algorithms by exploiting much simpler linear optimization steps. We give a regret bound for the proposed algorithm as a function of the network size and topology, which is smaller for smaller or “well-connected” graphs. Experiments on two large-scale real-world datasets for a multiclass classification task confirm the computational benefit of the proposed algorithm and also verify the theoretical regret bound.
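
The paper's algorithm is not reproduced on this page; the minimal Python/NumPy sketch below only illustrates the general conditional-gradient (Frank-Wolfe) idea the abstract refers to, namely replacing a projection step with a cheaper linear optimization step. The L1-ball feasible set, the toy quadratic loss, and all function names are illustrative assumptions, not details taken from the paper.

import numpy as np

def linear_oracle_l1(grad, radius=1.0):
    # Linear minimization oracle over the L1 ball:
    #   argmin_{||v||_1 <= radius} <grad, v>.
    # Only the coordinate with the largest |gradient| is needed,
    # so no projection onto the feasible set is required.
    v = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    v[i] = -radius * np.sign(grad[i])
    return v

def conditional_gradient_step(x, grad, t):
    # One Frank-Wolfe (conditional gradient) update: move toward the
    # oracle's extreme point with a decaying step size; the iterate
    # stays feasible because it is a convex combination of feasible points.
    v = linear_oracle_l1(grad)
    gamma = 2.0 / (t + 2.0)          # standard decaying step size
    return (1.0 - gamma) * x + gamma * v

# Toy usage: minimize f(x) = 0.5 * ||x - b||^2 over the unit L1 ball.
b = np.array([0.8, -0.3, 0.1])
x = np.zeros_like(b)
for t in range(50):
    grad = x - b                     # gradient of the toy quadratic loss
    x = conditional_gradient_step(x, grad, t)
print(x)

For feasible sets such as L1 or trace-norm balls, the linear oracle reduces to finding a single extreme point, which is typically far cheaper than a full projection; this is the computational benefit the abstract highlights.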

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-zhang17g,
  title     = {Projection-free Distributed Online Learning in Networks},
  author    = {Wenpeng Zhang and Peilin Zhao and Wenwu Zhu and Steven C. H. Hoi and Tong Zhang},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {4054--4062},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/zhang17g/zhang17g.pdf},
  url       = {https://proceedings.mlr.press/v70/zhang17g.html},
  abstract  = {The conditional gradient algorithm has regained a surge of research interest in recent years due to its high efficiency in handling large-scale machine learning problems. However, none of existing studies has explored it in the distributed online learning setting, where locally light computation is assumed. In this paper, we fill this gap by proposing the distributed online conditional gradient algorithm, which eschews the expensive projection operation needed in its counterpart algorithms by exploiting much simpler linear optimization steps. We give a regret bound for the proposed algorithm as a function of the network size and topology, which will be smaller on smaller graphs or “well-connected” graphs. Experiments on two large-scale real-world datasets for a multiclass classification task confirm the computational benefit of the proposed algorithm and also verify the theoretical regret bound.}
}
Endnote
%0 Conference Paper
%T Projection-free Distributed Online Learning in Networks
%A Wenpeng Zhang
%A Peilin Zhao
%A Wenwu Zhu
%A Steven C. H. Hoi
%A Tong Zhang
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-zhang17g
%I PMLR
%P 4054--4062
%U https://proceedings.mlr.press/v70/zhang17g.html
%V 70
%X The conditional gradient algorithm has regained a surge of research interest in recent years due to its high efficiency in handling large-scale machine learning problems. However, none of existing studies has explored it in the distributed online learning setting, where locally light computation is assumed. In this paper, we fill this gap by proposing the distributed online conditional gradient algorithm, which eschews the expensive projection operation needed in its counterpart algorithms by exploiting much simpler linear optimization steps. We give a regret bound for the proposed algorithm as a function of the network size and topology, which will be smaller on smaller graphs or “well-connected” graphs. Experiments on two large-scale real-world datasets for a multiclass classification task confirm the computational benefit of the proposed algorithm and also verify the theoretical regret bound.
APA
Zhang, W., Zhao, P., Zhu, W., Hoi, S. C. H., & Zhang, T. (2017). Projection-free Distributed Online Learning in Networks. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:4054-4062. Available from https://proceedings.mlr.press/v70/zhang17g.html.
