Learning Sparsely Used Overcomplete Dictionaries

Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli, Rashish Tandon
Proceedings of The 27th Conference on Learning Theory, PMLR 35:123-137, 2014.

Abstract

We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact recovery when the dictionary elements are mutually incoherent. Our method consists of a clustering-based initialization step, which provides an approximate estimate of the true dictionary with guaranteed accuracy. This estimate is then refined via an iterative algorithm with the following alternating steps: 1) estimation of the dictionary coefficients for each observation through \ell_1 minimization, given the dictionary estimate, and 2) estimation of the dictionary elements through least squares, given the coefficient estimates. We establish that, under a set of sufficient conditions, our method converges at a linear rate to the true dictionary as well as the true coefficients for each observation.
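To make the alternating refinement step concrete, here is a minimal numerical sketch in Python/NumPy. It is not the paper's algorithm verbatim: the exact ℓ_1 minimization of the coefficient step is approximated by an ISTA-based lasso solve, the clustering-based initialization is assumed to be given as `A0`, and the names `ista` and `alternating_minimization`, the regularization weight `lam`, and the iteration counts are illustrative choices rather than anything specified in the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Approximate the l1 coefficient step by minimizing
    0.5*||A x - y||_2^2 + lam*||x||_1 with ISTA (a stand-in for the
    paper's exact l1 minimization)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

def alternating_minimization(Y, A0, lam=0.1, n_outer=20):
    """Refine a dictionary estimate A0 (d x r) from observations Y (d x n).

    Alternates (1) sparse coding of each observation and (2) a
    least-squares dictionary update followed by column normalization.
    A0 plays the role of the clustering-based initialization, which is
    not implemented here.
    """
    A = A0.copy()
    for _ in range(n_outer):
        # Step 1: estimate coefficients for each observation (column of Y).
        X = np.column_stack([ista(A, y, lam) for y in Y.T])
        # Step 2: update the dictionary by least squares, A = Y X^T (X X^T)^+.
        A = Y @ X.T @ np.linalg.pinv(X @ X.T)
        # Keep dictionary elements at unit norm (epsilon guards empty columns).
        A /= np.linalg.norm(A, axis=0, keepdims=True) + 1e-12
    return A, X
```

As a usage sketch, with `Y` of shape (d, n) generated as `A_true @ X_true` for a sparse `X_true`, and `A0` a mildly perturbed copy of `A_true`, `alternating_minimization(Y, A0)` returns the refined dictionary and coefficient estimates; the guarantees in the paper apply to its exact ℓ_1 and initialization steps, not to this simplified sketch.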

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-agarwal14a,
  title     = {Learning Sparsely Used Overcomplete Dictionaries},
  author    = {Agarwal, Alekh and Anandkumar, Animashree and Jain, Prateek and Netrapalli, Praneeth and Tandon, Rashish},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {123--137},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/agarwal14a.pdf},
  url       = {https://proceedings.mlr.press/v35/agarwal14a.html},
  abstract  = {We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact recovery when the dictionary elements are mutually incoherent. Our method consists of a clustering-based initialization step, which provides an approximate estimate of the true dictionary with guaranteed accuracy. This estimate is then refined via an iterative algorithm with the following alternating steps: 1) estimation of the dictionary coefficients for each observation through \ell_1 minimization, given the dictionary estimate, and 2) estimation of the dictionary elements through least squares, given the coefficient estimates. We establish that, under a set of sufficient conditions, our method converges at a linear rate to the true dictionary as well as the true coefficients for each observation.}
}
Endnote
%0 Conference Paper
%T Learning Sparsely Used Overcomplete Dictionaries
%A Alekh Agarwal
%A Animashree Anandkumar
%A Prateek Jain
%A Praneeth Netrapalli
%A Rashish Tandon
%B Proceedings of The 27th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2014
%E Maria Florina Balcan
%E Vitaly Feldman
%E Csaba Szepesvári
%F pmlr-v35-agarwal14a
%I PMLR
%P 123--137
%U https://proceedings.mlr.press/v35/agarwal14a.html
%V 35
%X We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact recovery when the dictionary elements are mutually incoherent. Our method consists of a clustering-based initialization step, which provides an approximate estimate of the true dictionary with guaranteed accuracy. This estimate is then refined via an iterative algorithm with the following alternating steps: 1) estimation of the dictionary coefficients for each observation through \ell_1 minimization, given the dictionary estimate, and 2) estimation of the dictionary elements through least squares, given the coefficient estimates. We establish that, under a set of sufficient conditions, our method converges at a linear rate to the true dictionary as well as the true coefficients for each observation.
RIS
TY - CPAPER
TI - Learning Sparsely Used Overcomplete Dictionaries
AU - Alekh Agarwal
AU - Animashree Anandkumar
AU - Prateek Jain
AU - Praneeth Netrapalli
AU - Rashish Tandon
BT - Proceedings of The 27th Conference on Learning Theory
DA - 2014/05/29
ED - Maria Florina Balcan
ED - Vitaly Feldman
ED - Csaba Szepesvári
ID - pmlr-v35-agarwal14a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 35
SP - 123
EP - 137
L1 - http://proceedings.mlr.press/v35/agarwal14a.pdf
UR - https://proceedings.mlr.press/v35/agarwal14a.html
AB - We consider the problem of learning sparsely used overcomplete dictionaries, where each observation is a sparse combination of elements from an unknown overcomplete dictionary. We establish exact recovery when the dictionary elements are mutually incoherent. Our method consists of a clustering-based initialization step, which provides an approximate estimate of the true dictionary with guaranteed accuracy. This estimate is then refined via an iterative algorithm with the following alternating steps: 1) estimation of the dictionary coefficients for each observation through \ell_1 minimization, given the dictionary estimate, and 2) estimation of the dictionary elements through least squares, given the coefficient estimates. We establish that, under a set of sufficient conditions, our method converges at a linear rate to the true dictionary as well as the true coefficients for each observation.
ER -
APA
Agarwal, A., Anandkumar, A., Jain, P., Netrapalli, P. & Tandon, R. (2014). Learning Sparsely Used Overcomplete Dictionaries. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:123-137. Available from https://proceedings.mlr.press/v35/agarwal14a.html.