Asynchronous Doubly Stochastic Group Regularized Learning


Bin Gu, Zhouyuan Huo, Heng Huang;
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:1791-1800, 2018.

Abstract

Group regularized learning problems (such as group Lasso) are important in machine learning. Asynchronous parallel stochastic optimization algorithms have recently received significant attention for handling large-scale problems. However, existing asynchronous stochastic algorithms for solving group regularized learning problems do not scale well simultaneously with sample size and feature dimensionality. To address this challenging problem, in this paper, we propose a novel asynchronous doubly stochastic proximal gradient algorithm with variance reduction (AsyDSPG+). To the best of our knowledge, AsyDSPG+ is the first asynchronous doubly stochastic proximal gradient algorithm, and it scales well with large sample size and high feature dimensionality simultaneously. More importantly, we provide a comprehensive convergence guarantee for AsyDSPG+. The experimental results on various large-scale real-world datasets not only confirm the fast convergence of our new method, but also show that AsyDSPG+ scales better with sample size and dimensionality simultaneously than existing algorithms.
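To illustrate the "doubly stochastic" idea the abstract describes, the following is a minimal serial sketch of a proximal gradient method with SVRG-style variance reduction for group-Lasso-regularized least squares: each inner step samples one training example and one coordinate group. This is an illustrative sketch only, not the paper's AsyDSPG+ algorithm; in particular, the asynchronous parallel scheduling and the paper's specific update rules are omitted, and all function names and step-size choices here are assumptions.

```python
import numpy as np

def group_soft_threshold(v, tau):
    """Proximal operator of tau * ||v||_2 (one block of the group-Lasso penalty)."""
    norm = np.linalg.norm(v)
    if norm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / norm) * v

def doubly_stochastic_prox_grad(X, y, groups, lam=0.1, step=0.05, epochs=50, seed=0):
    """Serial sketch (not the paper's AsyDSPG+): doubly stochastic proximal
    gradient with SVRG-style variance reduction for group-Lasso least squares.
    Each inner step samples one example AND one coordinate group."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Full gradient at a snapshot point (the variance-reduction anchor).
        w_snap = w.copy()
        full_grad = X.T @ (X @ w_snap - y) / n
        for _ in range(n):
            i = rng.integers(n)                     # sample one training example
            g = groups[rng.integers(len(groups))]   # sample one coordinate group
            xi = X[i]
            # Variance-reduced stochastic gradient restricted to group g.
            grad_i = (xi @ w - y[i]) * xi[g]
            grad_snap = (xi @ w_snap - y[i]) * xi[g]
            vr_grad = grad_i - grad_snap + full_grad[g]
            # Proximal (group soft-thresholding) update on group g only.
            w[g] = group_soft_threshold(w[g] - step * vr_grad, step * lam)
    return w
```

In an asynchronous implementation, multiple workers would run the inner loop concurrently on a shared parameter vector, reading possibly stale coordinates; the doubly stochastic sampling keeps each update cheap in both the sample and feature dimensions.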
