\ell_1,p-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods


Zirui Zhou, Qi Zhang, Anthony Man-Cho So;
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1501-1510, 2015.

Abstract

Recently, \ell_1,p-regularization has been widely used to induce structured sparsity in the solutions to various optimization problems. Motivated by the desire to analyze the convergence rate of first-order methods, we show that for a large class of \ell_1,p-regularized problems, an error bound condition is satisfied when p∈[1,2] or p=∞ but fails to hold for any p∈(2,∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to \ell_1,p-regularized linear or logistic regression with p∈[1,2] or p=∞. By contrast, numerical experiments suggest that for the same class of problems with p∈(2,∞), the aforementioned methods may not converge linearly.
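To make the setting concrete, the following is a minimal sketch of one first-order method of the kind the abstract refers to: proximal gradient descent on \ell_1,2-regularized least squares (the group-lasso case p=2), whose proximal step is block soft-thresholding. The function names, group partition, step size, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_l12(x, groups, t):
    # Proximal operator of t * sum_g ||x_g||_2 (block soft-thresholding):
    # each group's coefficients shrink toward zero by t in Euclidean norm.
    out = x.copy()
    for g in groups:
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= t else (1.0 - t / norm) * x[g]
    return out

def prox_grad_l12(A, b, groups, lam, n_iter=500):
    # Proximal gradient for min_x 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = prox_l12(x - grad / L, groups, lam / L)
    return x
```

For example, with A the identity the iteration reduces to a single block soft-thresholding of b: a group whose norm is below lam is zeroed out entirely, which is the structured sparsity the regularizer is designed to induce.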
