Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity

Santosh S. Vempala, Ying Xiao
Proceedings of The 28th Conference on Learning Theory, PMLR 40:1710-1723, 2015.

Abstract

We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.

Cite this Paper


BibTeX
@InProceedings{pmlr-v40-Vempala15,
  title     = {Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity},
  author    = {Vempala, Santosh S. and Xiao, Ying},
  booktitle = {Proceedings of The 28th Conference on Learning Theory},
  pages     = {1710--1723},
  year      = {2015},
  editor    = {Grünwald, Peter and Hazan, Elad and Kale, Satyen},
  volume    = {40},
  series    = {Proceedings of Machine Learning Research},
  address   = {Paris, France},
  month     = {03--06 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v40/Vempala15.pdf},
  url       = {https://proceedings.mlr.press/v40/Vempala15.html},
  abstract  = {We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.}
}
Endnote
%0 Conference Paper
%T Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity
%A Santosh S. Vempala
%A Ying Xiao
%B Proceedings of The 28th Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2015
%E Peter Grünwald
%E Elad Hazan
%E Satyen Kale
%F pmlr-v40-Vempala15
%I PMLR
%P 1710--1723
%U https://proceedings.mlr.press/v40/Vempala15.html
%V 40
%X We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.
RIS
TY  - CPAPER
TI  - Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity
AU  - Santosh S. Vempala
AU  - Ying Xiao
BT  - Proceedings of The 28th Conference on Learning Theory
DA  - 2015/06/26
ED  - Peter Grünwald
ED  - Elad Hazan
ED  - Satyen Kale
ID  - pmlr-v40-Vempala15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 40
SP  - 1710
EP  - 1723
L1  - http://proceedings.mlr.press/v40/Vempala15.pdf
UR  - https://proceedings.mlr.press/v40/Vempala15.html
AB  - We present a simple, general technique for reducing the sample complexity of matrix and tensor decomposition algorithms applied to distributions. We use the technique to give a polynomial-time algorithm for standard ICA with sample complexity nearly linear in the dimension, thereby improving substantially on previous bounds. The analysis is based on properties of random polynomials, namely the spacings of an ensemble of polynomials. Our technique also applies to other applications of tensor decompositions, including spherical Gaussian mixture models.
ER  -
APA
Vempala, S. S. & Xiao, Y. (2015). Max vs Min: Tensor Decomposition and ICA with nearly Linear Sample Complexity. Proceedings of The 28th Conference on Learning Theory, in Proceedings of Machine Learning Research 40:1710-1723. Available from https://proceedings.mlr.press/v40/Vempala15.html.