Sample-Optimal Low-Rank Approximation of Distance Matrices

Piotr Indyk, Ali Vakilian, Tal Wagner, David P Woodruff
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1723-1751, 2019.

Abstract

A distance matrix $A \in \mathbb{R}^{n \times m}$ represents all pairwise distances, $A_{i,j} = d(x_i,y_j)$, between two point sets $x_1,\dotsc,x_n$ and $y_1,\dotsc,y_m$ in an arbitrary metric space $(\mathcal{Z},d)$. Such matrices arise in various computational contexts such as learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In this work we study algorithms for low-rank approximation of distance matrices. Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \cdot \mathrm{poly}(k,1/\epsilon)$, where $\epsilon>0$ is an error parameter and $\gamma>0$ is an arbitrarily small constant. Notably, their bound is sublinear in the matrix size, which is unachievable for general matrices. We present an algorithm that is both simpler and more efficient. It reads only $O((n+m)k/\epsilon)$ entries of the input matrix, and has a running time of $O(n+m) \cdot \mathrm{poly}(k,1/\epsilon)$. We complement the sample complexity of our algorithm with a matching lower bound on the number of entries that must be read by any algorithm. We provide experimental results to validate the approximation quality and running time of our algorithm.
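To make the sample-complexity claim concrete, the following minimal Python/NumPy sketch illustrates the general flavor of such a sampling-based method: it uniformly samples $O(k/\epsilon)$ rows and columns and assembles a CUR-style rank-$k$ factorization from them, reading only $O((n+m)k/\epsilon)$ entries in total. The function name, the uniform sampling scheme, and the CUR construction are illustrative assumptions for this sketch, not the paper's exact algorithm or its guarantee; uniform sampling is plausible here only because distance matrices carry metric structure, and fails for general matrices.

import numpy as np

def cur_distance_lowrank(entry, n, m, k, eps, rng=None):
    # Hedged sketch (NOT the paper's exact algorithm): a CUR-style
    # rank-k approximation reading only O((n+m)k/eps) entries.
    # `entry(i, j)` returns d(x_i, y_j) on demand.
    rng = np.random.default_rng(0) if rng is None else rng
    s = min(n, m, int(np.ceil(k / eps)))          # O(k/eps) sampled indices
    rows = rng.choice(n, size=s, replace=False)
    cols = rng.choice(m, size=s, replace=False)

    # Read only the sampled rows and columns: about (n + m) * s entries.
    R = np.array([[entry(i, j) for j in range(m)] for i in rows])   # s x m
    C = np.array([[entry(i, j) for j in cols] for i in range(n)])   # n x s
    W = C[rows, :]                                 # s x s intersection block

    # Pseudo-inverse of the best rank-k approximation of W; then
    # (C @ U) @ R has rank at most k, and the SVD is only on an s x s matrix,
    # so the post-sampling work is (n + m) * poly(k, 1/eps).
    Uw, sw, Vwt = np.linalg.svd(W)
    tol = 1e-10 * max(sw[0], 1e-300)
    r = min(k, int((sw > tol).sum()))
    inv = np.zeros_like(sw)
    inv[:r] = 1.0 / sw[:r]
    U = (Vwt.T * inv) @ Uw.T
    return C @ U, R          # rank-k factors: A_approx = (C @ U) @ R

# Toy usage (hypothetical): Euclidean distances between two random point sets.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, Y = rng.normal(size=(200, 3)), rng.normal(size=(150, 3))
    d = lambda i, j: float(np.linalg.norm(X[i] - Y[j]))
    L, Rf = cur_distance_lowrank(d, 200, 150, k=10, eps=0.5, rng=rng)
    A = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    print("relative error:", np.linalg.norm(A - L @ Rf) / np.linalg.norm(A))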

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-indyk19a,
  title = {Sample-Optimal Low-Rank Approximation of Distance Matrices},
  author = {Indyk, Piotr and Vakilian, Ali and Wagner, Tal and Woodruff, David P},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages = {1723--1751},
  year = {2019},
  editor = {Beygelzimer, Alina and Hsu, Daniel},
  volume = {99},
  series = {Proceedings of Machine Learning Research},
  month = {25--28 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v99/indyk19a/indyk19a.pdf},
  url = {https://proceedings.mlr.press/v99/indyk19a.html},
  abstract = {A distance matrix $A \in \mathbb{R}^{n \times m}$ represents all pairwise distances, $A_{i,j} = d(x_i,y_j)$, between two point sets $x_1,\dotsc,x_n$ and $y_1,\dotsc,y_m$ in an arbitrary metric space $(\mathcal{Z},d)$. Such matrices arise in various computational contexts such as learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In this work we study algorithms for low-rank approximation of distance matrices. Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \cdot \mathrm{poly}(k,1/\epsilon)$, where $\epsilon>0$ is an error parameter and $\gamma>0$ is an arbitrarily small constant. Notably, their bound is sublinear in the matrix size, which is unachievable for general matrices. We present an algorithm that is both simpler and more efficient. It reads only $O((n+m)k/\epsilon)$ entries of the input matrix, and has a running time of $O(n+m) \cdot \mathrm{poly}(k,1/\epsilon)$. We complement the sample complexity of our algorithm with a matching lower bound on the number of entries that must be read by any algorithm. We provide experimental results to validate the approximation quality and running time of our algorithm.}
}
Endnote
%0 Conference Paper
%T Sample-Optimal Low-Rank Approximation of Distance Matrices
%A Piotr Indyk
%A Ali Vakilian
%A Tal Wagner
%A David P Woodruff
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-indyk19a
%I PMLR
%P 1723--1751
%U https://proceedings.mlr.press/v99/indyk19a.html
%V 99
%X A distance matrix $A \in \mathbb{R}^{n \times m}$ represents all pairwise distances, $A_{i,j} = d(x_i,y_j)$, between two point sets $x_1,\dotsc,x_n$ and $y_1,\dotsc,y_m$ in an arbitrary metric space $(\mathcal{Z},d)$. Such matrices arise in various computational contexts such as learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In this work we study algorithms for low-rank approximation of distance matrices. Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \cdot \mathrm{poly}(k,1/\epsilon)$, where $\epsilon>0$ is an error parameter and $\gamma>0$ is an arbitrarily small constant. Notably, their bound is sublinear in the matrix size, which is unachievable for general matrices. We present an algorithm that is both simpler and more efficient. It reads only $O((n+m)k/\epsilon)$ entries of the input matrix, and has a running time of $O(n+m) \cdot \mathrm{poly}(k,1/\epsilon)$. We complement the sample complexity of our algorithm with a matching lower bound on the number of entries that must be read by any algorithm. We provide experimental results to validate the approximation quality and running time of our algorithm.
APA
Indyk, P., Vakilian, A., Wagner, T. & Woodruff, D. P. (2019). Sample-Optimal Low-Rank Approximation of Distance Matrices. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:1723-1751. Available from https://proceedings.mlr.press/v99/indyk19a.html.