Distributed Inexact Newton-type Pursuit for Non-convex Sparse Learning
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:343-352, 2019.
Abstract
In this paper, we present a sample-distributed greedy pursuit method for non-convex sparse learning under a cardinality constraint. With the training samples uniformly randomly partitioned across multiple machines, the proposed method alternates between local inexact sparse minimization of a Newton-type approximation and centralized aggregation of the local results. Theoretical analysis shows that for a general class of convex functions with Lipschitz continuous Hessian, the method converges linearly with a contraction factor that scales inversely with the local data size; the communication complexity required to reach a desirable statistical accuracy scales logarithmically with the number of machines for several popular statistical learning models. For non-convex objective functions, up to a local estimation error, our method can be shown to converge to a local stationary sparse solution with sub-linear communication complexity. Numerical results demonstrate the efficiency and accuracy of our method when applied to large-scale sparse learning tasks, including deep neural network pruning.
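As a rough illustration of the alternation described in the abstract, the sketch below simulates the scheme on a least-squares loss: each machine takes a damped Newton-type step on its local data, hard thresholding enforces the cardinality constraint, and a server averages the local iterates. The choice of loss, the damping, the averaging rule, and all function names here are assumptions made for illustration, not the paper's exact algorithm.

```python
# Hypothetical sketch of a distributed Newton-type pursuit; the least-squares
# loss, damping, and averaging aggregation are illustrative assumptions.
import numpy as np

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w, zero out the rest."""
    out = np.zeros_like(w)
    top = np.argsort(np.abs(w))[-k:]
    out[top] = w[top]
    return out

def local_newton_step(X, y, w, k, damping=1e-3):
    """One inexact Newton-type step on the local least-squares objective,
    followed by hard thresholding to enforce the cardinality constraint."""
    n = X.shape[0]
    grad = X.T @ (X @ w - y) / n                    # local gradient
    hess = X.T @ X / n + damping * np.eye(w.size)   # damped local Hessian
    return hard_threshold(w - np.linalg.solve(hess, grad), k)

def distributed_pursuit(parts, k, dim, rounds=20):
    """Alternate local sparse Newton-type minimization with centralized
    aggregation (here: averaging of local iterates, then re-thresholding)."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local = [local_newton_step(X, y, w, k) for X, y in parts]
        w = hard_threshold(np.mean(local, axis=0), k)  # server aggregation
    return w

# Toy usage: recover a 5-sparse vector from data split across 4 machines.
rng = np.random.default_rng(0)
w_true = hard_threshold(rng.normal(size=50), 5)
parts = []
for _ in range(4):
    X = rng.normal(size=(200, 50))
    parts.append((X, X @ w_true + 0.01 * rng.normal(size=200)))
print(np.linalg.norm(distributed_pursuit(parts, 5, 50) - w_true))
```

Note that each round costs one round of communication (sending local iterates to the server and broadcasting the aggregate back), which is the quantity the abstract's communication-complexity bounds refer to.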