How to Fake Multiply by a Gaussian Matrix

Michael Kapralov, Vamsi Potluru, David Woodruff
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2101-2110, 2016.

Abstract

Have you ever wanted to multiply an n × d matrix X, with n ≫ d, on the left by an m × n matrix G̃ of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized m × n matrix T for which one can compute T·X in only O(nnz(X)) + Õ(m^{1.5}·d^3) time, and for which the total variation distance between the distributions of T·X and G̃·X is as small as desired, i.e., less than any positive constant. Here nnz(X) denotes the number of non-zero entries of X. Assuming nnz(X) ≫ m^{1.5}·d^3, this is a significant savings over the naïve O(nnz(X)·m) time to compute G̃·X. Moreover, since the total variation distance is small, we can provably use T·X in place of G̃·X in any application and obtain the same guarantees as if we were using G̃·X, up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).
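To illustrate the kind of speedup the abstract describes, here is a minimal sketch of a standard two-stage pattern for approximating a Gaussian multiply: first apply a CountSketch (computable in O(nnz(X)) time), then multiply the small sketched matrix by a dense Gaussian. This is an illustrative composition only, not the paper's exact transform T or its parameter choices; the function name and the choice of k are hypothetical.

```python
import numpy as np

def fake_gaussian_multiply(X, m, k, rng):
    """Approximate G @ X for a Gaussian G (m x n) via a two-stage sketch.

    Stage 1: a CountSketch S (k x n), applied in time proportional to nnz(X).
    Stage 2: a small dense Gaussian G2 (m x k) applied to the k x d sketch.
    Illustrative only; the paper's transform T differs in its construction
    and in how k is set to control the total variation distance.
    """
    n, d = X.shape
    # CountSketch: each row of X is hashed to one of k buckets with a random sign.
    rows = rng.integers(0, k, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    SX = np.zeros((k, d))
    for i in range(n):                  # cost proportional to nnz(X) for sparse X
        SX[rows[i]] += signs[i] * X[i]
    G2 = rng.standard_normal((m, k))    # small dense Gaussian, cheap to apply
    return G2 @ SX                      # an m x d matrix approximating G @ X

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))      # n = 1000 rows, d = 5 columns
TX = fake_gaussian_multiply(X, m=20, k=200, rng=rng)
print(TX.shape)
```

The point of the composition is that the expensive dimension n is collapsed cheaply by the sparse sketch before any dense Gaussian is applied, so the dense multiply touches only a k × d matrix rather than the full n × d input.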

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-kapralov16,
  title     = {How to Fake Multiply by a Gaussian Matrix},
  author    = {Michael Kapralov and Vamsi Potluru and David Woodruff},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2101--2110},
  year      = {2016},
  editor    = {Maria Florina Balcan and Kilian Q. Weinberger},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/kapralov16.pdf},
  url       = {http://proceedings.mlr.press/v48/kapralov16.html},
  abstract  = {Have you ever wanted to multiply an n × d matrix X, with n ≫ d, on the left by an m × n matrix G̃ of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized m × n matrix T, for which one can compute T·X in only O(nnz(X)) + Õ(m^{1.5}·d^3) time, for which the total variation distance between the distributions T·X and G̃·X is as small as desired, i.e., less than any positive constant. Here nnz(X) denotes the number of non-zero entries of X. Assuming nnz(X) ≫ m^{1.5}·d^3, this is a significant savings over the naïve O(nnz(X)·m) time to compute G̃·X. Moreover, since the total variation distance is small, we can provably use T·X in place of G̃·X in any application and have the same guarantees as if we were using G̃·X, up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).}
}
Endnote
%0 Conference Paper
%T How to Fake Multiply by a Gaussian Matrix
%A Michael Kapralov
%A Vamsi Potluru
%A David Woodruff
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-kapralov16
%I PMLR
%J Proceedings of Machine Learning Research
%P 2101--2110
%U http://proceedings.mlr.press
%V 48
%W PMLR
%X Have you ever wanted to multiply an n × d matrix X, with n ≫ d, on the left by an m × n matrix G̃ of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized m × n matrix T, for which one can compute T·X in only O(nnz(X)) + Õ(m^{1.5}·d^3) time, for which the total variation distance between the distributions T·X and G̃·X is as small as desired, i.e., less than any positive constant. Here nnz(X) denotes the number of non-zero entries of X. Assuming nnz(X) ≫ m^{1.5}·d^3, this is a significant savings over the naïve O(nnz(X)·m) time to compute G̃·X. Moreover, since the total variation distance is small, we can provably use T·X in place of G̃·X in any application and have the same guarantees as if we were using G̃·X, up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).
RIS
TY - CPAPER
TI - How to Fake Multiply by a Gaussian Matrix
AU - Michael Kapralov
AU - Vamsi Potluru
AU - David Woodruff
BT - Proceedings of The 33rd International Conference on Machine Learning
PY - 2016/06/11
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-kapralov16
PB - PMLR
SP - 2101
DP - PMLR
EP - 2110
L1 - http://proceedings.mlr.press/v48/kapralov16.pdf
UR - http://proceedings.mlr.press/v48/kapralov16.html
AB - Have you ever wanted to multiply an n × d matrix X, with n ≫ d, on the left by an m × n matrix G̃ of i.i.d. Gaussian random variables, but could not afford to do it because it was too slow? In this work we propose a new randomized m × n matrix T, for which one can compute T·X in only O(nnz(X)) + Õ(m^{1.5}·d^3) time, for which the total variation distance between the distributions T·X and G̃·X is as small as desired, i.e., less than any positive constant. Here nnz(X) denotes the number of non-zero entries of X. Assuming nnz(X) ≫ m^{1.5}·d^3, this is a significant savings over the naïve O(nnz(X)·m) time to compute G̃·X. Moreover, since the total variation distance is small, we can provably use T·X in place of G̃·X in any application and have the same guarantees as if we were using G̃·X, up to a small positive constant in error probability. We apply this transform to nonnegative matrix factorization (NMF) and support vector machines (SVM).
ER -
APA
Kapralov, M., Potluru, V. & Woodruff, D. (2016). How to Fake Multiply by a Gaussian Matrix. Proceedings of The 33rd International Conference on Machine Learning, in PMLR 48:2101-2110.
