Sketching Transformed Matrices with Applications to Natural Language Processing

Yingyu Liang, Zhao Song, Mengdi Wang, Lin Yang, Xin Yang
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:467-481, 2020.

Abstract

Suppose we are given a large matrix $A=(a_{i,j})$ that cannot be stored in memory but instead resides on disk or is presented as a data stream. However, we need to compute a matrix decomposition of the entrywise transformed matrix $f(A):=(f(a_{i,j}))$ for some function $f$. Can this be done in a space-efficient way? Many machine learning applications indeed need to deal with such large transformed matrices; for example, word embedding methods in NLP work with the pointwise mutual information (PMI) matrix, and the entrywise transformation makes it difficult to apply standard linear-algebraic tools. Existing approaches to this problem either store the whole matrix and apply the entrywise transformation afterwards, which is space-consuming or infeasible, or redesign the learning method, which is application-specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provably small error bounds and thus can be used as a primitive in downstream learning tasks. We then apply this primitive to two concrete applications: low-rank approximation and linear regression. We show that our approach achieves small error and is efficient in both space and time. For instance, for a large $n\times n$ matrix $A$, only $\tilde{O}(nk^3)$ space and a few scans over $A$ are needed to compute a rank-$k$ approximation of $\log(|A|+1)$ to a fixed accuracy. This is a nearly quadratic space improvement for small $k$. We complement our theoretical results with experiments on low-rank approximation with synthetic and real data.
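
To make the kind of computation described above concrete, the sketch below is a minimal, hypothetical baseline and not the algorithm proposed in the paper. It assumes the rows of A can be read in a few sequential scans (e.g. from disk), applies the abstract's example transform f(a) = log(|a| + 1) on the fly, and maintains a small randomized sketch of f(A) from which a rank-k approximation is assembled, keeping only a few n-by-(k + oversample) matrices in memory. The names rank_k_approx, stream_rows, and oversample are placeholders introduced here for illustration.

# Illustrative baseline only -- NOT the paper's sketching algorithm.
# Assumes rows of A can be re-read in a few sequential scans.
import numpy as np

def f(x):
    # Running example transform from the abstract: f(a) = log(|a| + 1).
    return np.log(np.abs(x) + 1.0)

def rank_k_approx(stream_rows, n, k, oversample=10, seed=0):
    """Two passes over the rows of A; only small sketch matrices are kept."""
    rng = np.random.default_rng(seed)
    d = k + oversample
    G = rng.standard_normal((n, d)) / np.sqrt(d)

    # Pass 1: S = f(A) @ G, accumulated one row at a time (f(A) never stored).
    S = np.vstack([f(row) @ G for row in stream_rows()])

    # Orthonormal basis for the column span of the sketch.
    Q, _ = np.linalg.qr(S)                      # shape (num_rows, d)

    # Pass 2: B = Q^T f(A), again formed row by row.
    B = np.zeros((d, n))
    for i, row in enumerate(stream_rows()):
        B += np.outer(Q[i], f(row))

    # Rank-k truncation of the small matrix B yields f(A) ~= (Q U_k) diag(s_k) Vt_k.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U[:, :k], s[:k], Vt[:k]

# Toy usage: in practice stream_rows would read rows from disk or a stream.
if __name__ == "__main__":
    A = np.random.poisson(2.0, size=(500, 500)).astype(float)
    def stream_rows():
        for row in A:        # stand-in for a sequential scan over A
            yield row
    U, s, Vt = rank_k_approx(stream_rows, n=500, k=10)
    approx = U @ np.diag(s) @ Vt
    print("relative error:", np.linalg.norm(f(A) - approx) / np.linalg.norm(f(A)))

This baseline works only because f can be applied to each entry as it is read; the paper's setting is harder (e.g. streamed updates to A interact nonlinearly with f), which is what its sketching primitive addresses.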

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-liang20a,
  title     = {Sketching Transformed Matrices with Applications to Natural Language Processing},
  author    = {Liang, Yingyu and Song, Zhao and Wang, Mengdi and Yang, Lin and Yang, Xin},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {467--481},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/liang20a/liang20a.pdf},
  url       = {https://proceedings.mlr.press/v108/liang20a.html},
  abstract  = {Suppose we are given a large matrix $A=(a_{i,j})$ that cannot be stored in memory but is in a disk or is presented in a data stream. However, we need to compute a matrix decomposition of the entry-wisely transformed matrix, $f(A):=(f(a_{i,j}))$ for some function $f$. Is it possible to do it in a space efficient way? Many machine learning applications indeed need to deal with such large transformed matrices, for example word embedding method in NLP needs to work with the pointwise mutual information (PMI) matrix, while the entrywise transformation makes it difficult to apply known linear algebraic tools. Existing approaches for this problem either need to store the whole matrix and perform the entry-wise transformation afterwards, which is space consuming or infeasible, or need to redesign the learning method, which is application specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provable small error bounds and thus can be used as a primitive in downstream learning tasks. We then apply this primitive to two concrete applications: low-rank approximation and linear regressions. We show that our approach obtains small error and is efficient in both space and time. For instance, for a large $n\times n$ matrix $A$, we show that only $\tilde{O}(nk^3)$ space and a few scans over the matrix $A$ are needed to compute a rank-$k$ approximation of $\log(|A|+1)$ to a fixed accuracy. This is a nearly quadratic space improvement for small $k$. We complement our theoretical results with experiments of low-rank approximation on synthetic and real data.}
}
Endnote
%0 Conference Paper
%T Sketching Transformed Matrices with Applications to Natural Language Processing
%A Yingyu Liang
%A Zhao Song
%A Mengdi Wang
%A Lin Yang
%A Xin Yang
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-liang20a
%I PMLR
%P 467--481
%U https://proceedings.mlr.press/v108/liang20a.html
%V 108
%X Suppose we are given a large matrix $A=(a_{i,j})$ that cannot be stored in memory but is in a disk or is presented in a data stream. However, we need to compute a matrix decomposition of the entry-wisely transformed matrix, $f(A):=(f(a_{i,j}))$ for some function $f$. Is it possible to do it in a space efficient way? Many machine learning applications indeed need to deal with such large transformed matrices, for example word embedding method in NLP needs to work with the pointwise mutual information (PMI) matrix, while the entrywise transformation makes it difficult to apply known linear algebraic tools. Existing approaches for this problem either need to store the whole matrix and perform the entry-wise transformation afterwards, which is space consuming or infeasible, or need to redesign the learning method, which is application specific and requires substantial remodeling. In this paper, we first propose a space-efficient sketching algorithm for computing the product of a given small matrix with the transformed matrix. It works for a general family of transformations with provable small error bounds and thus can be used as a primitive in downstream learning tasks. We then apply this primitive to two concrete applications: low-rank approximation and linear regressions. We show that our approach obtains small error and is efficient in both space and time. For instance, for a large $n\times n$ matrix $A$, we show that only $\tilde{O}(nk^3)$ space and a few scans over the matrix $A$ are needed to compute a rank-$k$ approximation of $\log(|A|+1)$ to a fixed accuracy. This is a nearly quadratic space improvement for small $k$. We complement our theoretical results with experiments of low-rank approximation on synthetic and real data.
APA
Liang, Y., Song, Z., Wang, M., Yang, L. & Yang, X. (2020). Sketching Transformed Matrices with Applications to Natural Language Processing. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:467-481. Available from https://proceedings.mlr.press/v108/liang20a.html.