ℓ_{1,p}-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods

Zirui Zhou, Qi Zhang, Anthony Man-Cho So
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1501-1510, 2015.

Abstract

Recently, ℓ_{1,p}-regularization has been widely used to induce structured sparsity in the solutions to various optimization problems. Motivated by the desire to analyze the convergence rate of first-order methods, we show that for a large class of ℓ_{1,p}-regularized problems, an error bound condition is satisfied when p ∈ [1,2] or p = ∞ but fails to hold for any p ∈ (2,∞). Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to ℓ_{1,p}-regularized linear or logistic regression with p ∈ [1,2] or p = ∞. By contrast, numerical experiments suggest that for the same class of problems with p ∈ (2,∞), the aforementioned methods may not converge linearly.
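
For a concrete picture of the problem class, a typical ℓ_{1,p}-regularized formulation minimizes f(Ax) + τ Σ_{J∈𝒥} ‖x_J‖_p over x, where 𝒥 partitions the coordinates into groups, f is a smooth loss such as the squared or logistic loss, and τ > 0. The sketch below is illustrative only and is not taken from the paper: it applies proximal gradient descent, one of the first-order methods the abstract refers to, to the p = 2 (group-lasso) case of least-squares regression, where the proximal map of the regularizer is blockwise soft-thresholding. The group structure, step size, and data are placeholder assumptions.

    import numpy as np

    def prox_group_l2(v, groups, thresh):
        # Blockwise soft-thresholding: proximal map of thresh * sum_J ||v_J||_2.
        out = v.copy()
        for J in groups:
            norm_J = np.linalg.norm(v[J])
            out[J] = 0.0 if norm_J <= thresh else (1.0 - thresh / norm_J) * v[J]
        return out

    def proximal_gradient_group_lasso(A, b, groups, tau, n_iter=500):
        # Proximal gradient descent on 0.5 * ||A x - b||^2 + tau * sum_J ||x_J||_2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)             # gradient of the smooth least-squares term
            x = prox_group_l2(x - step * grad, groups, step * tau)
        return x

    # Placeholder data and group structure, purely for illustration.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 6))
    b = rng.standard_normal(50)
    groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    print(proximal_gradient_group_lasso(A, b, groups, tau=1.0))

For p = 1, 2, and ∞ the proximal map of the block ℓ_p norm has a simple form (coordinatewise soft-thresholding, the shrinkage above, and projection via the dual ℓ_1 ball, respectively), which is part of why these values of p are computationally convenient; the abstract's point is that p ∈ [1,2] and p = ∞ are also exactly the cases in which the error bound, and hence linear convergence, can be guaranteed.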

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-zhoub15,
  title     = {$\ell_{1,p}$-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods},
  author    = {Zhou, Zirui and Zhang, Qi and So, Anthony Man-Cho},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1501--1510},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/zhoub15.pdf},
  url       = {https://proceedings.mlr.press/v37/zhoub15.html},
  abstract  = {Recently, $\ell_{1,p}$-regularization has been widely used to induce structured sparsity in the solutions to various optimization problems. Motivated by the desire to analyze the convergence rate of first-order methods, we show that for a large class of $\ell_{1,p}$-regularized problems, an error bound condition is satisfied when $p \in [1,2]$ or $p = \infty$ but fails to hold for any $p \in (2,\infty)$. Based on this result, we show that many first-order methods enjoy an asymptotic linear rate of convergence when applied to $\ell_{1,p}$-regularized linear or logistic regression with $p \in [1,2]$ or $p = \infty$. By contrast, numerical experiments suggest that for the same class of problems with $p \in (2,\infty)$, the aforementioned methods may not converge linearly.}
}
APA
Zhou, Z., Zhang, Q. & So, A. M.-C. (2015). ℓ_{1,p}-Norm Regularization: Error Bounds and Convergence Rate Analysis of First-Order Methods. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1501-1510. Available from https://proceedings.mlr.press/v37/zhoub15.html.
