Sparsity-Based Generalization Bounds for Predictive Sparse Coding

Nishant Mehta, Alexander Gray
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):36-44, 2013.

Abstract

The goal of predictive sparse coding is to learn a representation of examples as sparse linear combinations of elements from a dictionary, such that a learned hypothesis linear in the new representation performs well on a predictive task. Predictive sparse coding has demonstrated impressive performance on a variety of supervised tasks, but its generalization properties have not been studied. We establish the first generalization error bounds for predictive sparse coding, in the overcomplete setting, where the number of features k exceeds the original dimensionality d. The learning bound decays as O(√(dk/m)) with respect to d, k, and the size m of the training sample. It depends intimately on stability properties of the learned sparse encoder, as measured on the training sample. Consequently, we also present a fundamental stability result for the LASSO, a result that characterizes the stability of the sparse codes with respect to dictionary perturbations.
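To make the setup concrete, below is a minimal sketch (not the paper's algorithm or proof machinery) of the LASSO sparse encoder that predictive sparse coding relies on, together with an empirical probe of the quantity the stability result concerns: how much the sparse code moves when the dictionary is perturbed. The dimensions d and k, the regularization weight lam, and the perturbation scale eps are illustrative assumptions, and sklearn's Lasso is used as a stand-in solver.

# Sketch of the LASSO sparse encoder used in predictive sparse coding,
# plus an empirical check of code stability under a dictionary perturbation.
# d, k, lam, and eps below are illustrative choices, not values from the paper.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, k = 20, 50          # overcomplete setting: k > d
lam, eps = 0.1, 1e-3

# Random unit-norm dictionary D in R^{d x k} and an example x in R^d.
D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)
x = rng.standard_normal(d)

def encode(D, x, lam):
    """Sparse code z(x; D) = argmin_z (1/(2d)) ||x - D z||_2^2 + lam ||z||_1."""
    # sklearn's Lasso minimizes (1/(2 n_samples)) ||y - X w||^2 + alpha ||w||_1,
    # so fitting with X = D and y = x yields the LASSO encoder (up to the 1/d scaling).
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    model.fit(D, x)
    return model.coef_

z = encode(D, x, lam)
print("nonzeros in sparse code:", np.count_nonzero(z))

# Perturb the dictionary and re-encode: the paper's stability result bounds
# this kind of change in the sparse code under perturbations of D.
D_pert = D + eps * rng.standard_normal((d, k))
z_pert = encode(D_pert, x, lam)
print("||z - z'||_2 =", np.linalg.norm(z - z_pert))

In a full predictive sparse coding pipeline, a linear predictor would then be trained on the codes z; the generalization bound ties the predictor's error to how stably encode() behaves on the training sample.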

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-mehta13,
  title     = {Sparsity-Based Generalization Bounds for Predictive Sparse Coding},
  author    = {Mehta, Nishant and Gray, Alexander},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {36--44},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/mehta13.pdf},
  url       = {https://proceedings.mlr.press/v28/mehta13.html}
}
Endnote
%0 Conference Paper
%T Sparsity-Based Generalization Bounds for Predictive Sparse Coding
%A Nishant Mehta
%A Alexander Gray
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-mehta13
%I PMLR
%P 36--44
%U https://proceedings.mlr.press/v28/mehta13.html
%V 28
%N 1
RIS
TY  - CPAPER
TI  - Sparsity-Based Generalization Bounds for Predictive Sparse Coding
AU  - Nishant Mehta
AU  - Alexander Gray
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/02/13
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-mehta13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 1
SP  - 36
EP  - 44
L1  - http://proceedings.mlr.press/v28/mehta13.pdf
UR  - https://proceedings.mlr.press/v28/mehta13.html
ER  -
APA
Mehta, N. & Gray, A. (2013). Sparsity-Based Generalization Bounds for Predictive Sparse Coding. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):36-44. Available from https://proceedings.mlr.press/v28/mehta13.html.