Sparse Solutions to Nonnegative Linear Systems and Applications

Aditya Bhaskara, Ananda Suresh, Morteza Zadimoghaddam
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:83-92, 2015.

Abstract

We give an efficient algorithm for finding sparse approximate solutions to linear systems of equations with nonnegative coefficients. Unlike most known results for sparse recovery, we do not require any assumption on the matrix other than non-negativity. Our algorithm is combinatorial in nature, inspired by techniques for the “set cover” problem, as well as the multiplicative weight update method. We then present a natural application to learning mixture models in the PAC framework. For learning a mixture of k axis-aligned Gaussians in d dimensions, we give an algorithm that outputs a mixture of O(k/ε^3) Gaussians that is ε-close in statistical distance to the true distribution, without any separation assumptions. The time and sample complexity are roughly O(kd/ε^3)^d. This is polynomial when d is constant – precisely the regime in which known methods fail to identify the components efficiently. Given that non-negativity is a natural assumption, we believe that our result may find use in other settings in which we wish to approximately “explain” data using a small number of components from a (large) candidate set.
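As a rough illustration of the set-cover and multiplicative-weight-update flavour mentioned in the abstract, below is a minimal sketch in Python/NumPy. The function name, the normalization of columns to probability distributions, and the particular greedy scoring and weight-update rules are all assumptions made for illustration; this is not the paper's actual algorithm.

import numpy as np


def sparse_nonnegative_approx(A, b, t, eta=0.1):
    # Greedy, multiplicative-weights-flavoured sketch (illustrative only):
    # pick t columns of the nonnegative matrix A whose uniform mixture
    # approximates the nonnegative target b. Columns of A and b are
    # normalized to sum to 1, as is natural when the columns are candidate
    # mixture components (densities).
    A = A / A.sum(axis=0, keepdims=True)
    b = b / b.sum()
    n, m = A.shape
    w = np.ones(n)            # one weight per equation (row of the system)
    support = []
    for _ in range(t):
        # Score each candidate column by the weighted target mass it covers.
        scores = (w * b) @ A
        support.append(int(np.argmax(scores)))
        est = A[:, support].mean(axis=1)   # current uniform mixture estimate
        # Up-weight rows still under-covered (where b exceeds the estimate).
        w *= np.exp(eta * np.clip(b - est, 0.0, None) / (b.max() + 1e-12))
    x = np.zeros(m)
    for j in support:
        x[j] += 1.0 / t
    return x


# Tiny usage example on synthetic data.
rng = np.random.default_rng(0)
A = rng.random((50, 200))                 # 200 candidate components
x_true = np.zeros(200)
x_true[[3, 40, 77]] = [0.5, 0.3, 0.2]     # a 3-sparse ground truth
b = A @ x_true
x = sparse_nonnegative_approx(A, b, t=30)
An, bn = A / A.sum(axis=0, keepdims=True), b / b.sum()
print("l1 residual:", np.abs(An @ x - bn).sum())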

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-bhaskara15,
  title     = {{Sparse Solutions to Nonnegative Linear Systems and Applications}},
  author    = {Bhaskara, Aditya and Suresh, Ananda and Zadimoghaddam, Morteza},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {83--92},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/bhaskara15.pdf},
  url       = {https://proceedings.mlr.press/v38/bhaskara15.html},
  abstract  = {We give an efficient algorithm for finding sparse approximate solutions to linear systems of equations with nonnegative coefficients. Unlike most known results for sparse recovery, we do not require \emph{any} assumption on the matrix other than non-negativity. Our algorithm is combinatorial in nature, inspired by techniques for the “set cover” problem, as well as the multiplicative weight update method. We then present a natural application to learning mixture models in the PAC framework. For learning a mixture of k axis-aligned Gaussians in d dimensions, we give an algorithm that outputs a mixture of O(k/ε^3) Gaussians that is ε-close in statistical distance to the true distribution, without any separation assumptions. The time and sample complexity are roughly O(kd/ε^3)^d. This is polynomial when d is constant – precisely the regime in which known methods fail to identify the components efficiently. Given that non-negativity is a natural assumption, we believe that our result may find use in other settings in which we wish to approximately “explain” data using a small number of components from a (large) candidate set.}
}
Endnote
%0 Conference Paper
%T Sparse Solutions to Nonnegative Linear Systems and Applications
%A Aditya Bhaskara
%A Ananda Suresh
%A Morteza Zadimoghaddam
%B Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2015
%E Guy Lebanon
%E S. V. N. Vishwanathan
%F pmlr-v38-bhaskara15
%I PMLR
%P 83--92
%U https://proceedings.mlr.press/v38/bhaskara15.html
%V 38
%X We give an efficient algorithm for finding sparse approximate solutions to linear systems of equations with nonnegative coefficients. Unlike most known results for sparse recovery, we do not require any assumption on the matrix other than non-negativity. Our algorithm is combinatorial in nature, inspired by techniques for the “set cover” problem, as well as the multiplicative weight update method. We then present a natural application to learning mixture models in the PAC framework. For learning a mixture of k axis-aligned Gaussians in d dimensions, we give an algorithm that outputs a mixture of O(k/ε^3) Gaussians that is ε-close in statistical distance to the true distribution, without any separation assumptions. The time and sample complexity are roughly O(kd/ε^3)^d. This is polynomial when d is constant – precisely the regime in which known methods fail to identify the components efficiently. Given that non-negativity is a natural assumption, we believe that our result may find use in other settings in which we wish to approximately “explain” data using a small number of components from a (large) candidate set.
RIS
TY - CPAPER
TI - Sparse Solutions to Nonnegative Linear Systems and Applications
AU - Aditya Bhaskara
AU - Ananda Suresh
AU - Morteza Zadimoghaddam
BT - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
DA - 2015/02/21
ED - Guy Lebanon
ED - S. V. N. Vishwanathan
ID - pmlr-v38-bhaskara15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 38
SP - 83
EP - 92
L1 - http://proceedings.mlr.press/v38/bhaskara15.pdf
UR - https://proceedings.mlr.press/v38/bhaskara15.html
AB - We give an efficient algorithm for finding sparse approximate solutions to linear systems of equations with nonnegative coefficients. Unlike most known results for sparse recovery, we do not require any assumption on the matrix other than non-negativity. Our algorithm is combinatorial in nature, inspired by techniques for the “set cover” problem, as well as the multiplicative weight update method. We then present a natural application to learning mixture models in the PAC framework. For learning a mixture of k axis-aligned Gaussians in d dimensions, we give an algorithm that outputs a mixture of O(k/ε^3) Gaussians that is ε-close in statistical distance to the true distribution, without any separation assumptions. The time and sample complexity are roughly O(kd/ε^3)^d. This is polynomial when d is constant – precisely the regime in which known methods fail to identify the components efficiently. Given that non-negativity is a natural assumption, we believe that our result may find use in other settings in which we wish to approximately “explain” data using a small number of components from a (large) candidate set.
ER -
APA
Bhaskara, A., Suresh, A. & Zadimoghaddam, M. (2015). Sparse Solutions to Nonnegative Linear Systems and Applications. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:83-92. Available from https://proceedings.mlr.press/v38/bhaskara15.html.