Gaussian Sketching yields a J-L Lemma in RKHS

Samory Kpotufe, Bharath Sriperumbudur
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3928-3937, 2020.

Abstract

The main contribution of the paper is to show that Gaussian sketching of a kernel Gram matrix $\bm K$ yields an operator whose counterpart in an RKHS $\cal H$ is a \emph{random projection} operator, in the spirit of the Johnson-Lindenstrauss (J-L) lemma. To be precise, given a random matrix $Z$ with i.i.d. Gaussian entries, we show that the sketch $Z\bm{K}$ corresponds to a particular random operator in the (infinite-dimensional) Hilbert space $\cal H$ that maps functions $f \in \cal H$ to a low-dimensional space $\mathbb{R}^d$, while preserving a weighted RKHS inner product of the form $\langle f, g \rangle_{\Sigma} \doteq \langle f, \Sigma^3 g \rangle_{\cal H}$, where $\Sigma$ is the \emph{covariance} operator induced by the data distribution. In particular, under assumptions similar to those of kernel PCA (KPCA) or kernel $k$-means (K-$k$-means), well-separated subsets of the feature space $\{K(\cdot, x): x \in \cal X\}$ remain well-separated after such an operation, which suggests benefits similar to those of KPCA and/or K-$k$-means, albeit at the much cheaper cost of a random projection. Moreover, our convergence rates suggest that, given a large dataset $\{X_i\}_{i=1}^N$ of size $N$, we can build the Gram matrix $\bm K$ on a much smaller subsample of size $n\ll N$, so that the sketch $Z\bm K$ is very cheap to obtain and subsequently apply as a projection operator on the original data $\{X_i\}_{i=1}^N$. We verify these insights empirically on synthetic data and on real-world clustering applications.
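
To make the pipeline concrete, below is a minimal illustrative sketch in Python/NumPy (not the authors' code) of the procedure the abstract describes: build the Gram matrix $\bm K$ on a subsample of size $n \ll N$, draw a $d \times n$ matrix $Z$ with i.i.d. Gaussian entries, and embed each point $x$ of the full dataset as $Z k(x)$, where $k(x) = (K(X_1, x), \dots, K(X_n, x))$. The RBF kernel, the $1/\sqrt{d}$ scaling of $Z$, and all function names are assumptions made here purely for illustration.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # RBF kernel matrix between rows of A and rows of B (an illustrative choice of K).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def gaussian_sketch_embedding(X_sub, X_full, d, gamma=1.0, seed=0):
    # Build the n x n Gram matrix on the subsample, sketch it with a d x n Gaussian
    # matrix Z, and map every point X_i of the full dataset to Z k(X_i) in R^d.
    rng = np.random.default_rng(seed)
    n = X_sub.shape[0]
    K = rbf_kernel(X_sub, X_sub, gamma)            # Gram matrix on the n-point subsample
    Z = rng.standard_normal((d, n)) / np.sqrt(d)   # i.i.d. Gaussian sketching matrix
    ZK = Z @ K                                     # the d x n sketch of the Gram matrix
    embedding = (Z @ rbf_kernel(X_sub, X_full, gamma)).T   # N x d embedding of the full data
    return ZK, embedding

# Example: subsample n = 200 points from N = 10,000 and embed into d = 20 dimensions.
X_full = np.random.default_rng(1).standard_normal((10000, 5))
X_sub = X_full[np.random.default_rng(2).choice(10000, size=200, replace=False)]
ZK, embedding = gaussian_sketch_embedding(X_sub, X_full, d=20)

The resulting $N \times d$ embedding can then be handed to ordinary $k$-means as a cheap surrogate for kernel $k$-means, which is the kind of downstream use the abstract alludes to.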

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-kpotufe20a, title = {Gaussian Sketching yields a J-L Lemma in RKHS}, author = {Kpotufe, Samory and Sriperumbudur, Bharath}, booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics}, pages = {3928--3937}, year = {2020}, editor = {Chiappa, Silvia and Calandra, Roberto}, volume = {108}, series = {Proceedings of Machine Learning Research}, month = {26--28 Aug}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v108/kpotufe20a/kpotufe20a.pdf}, url = {https://proceedings.mlr.press/v108/kpotufe20a.html}, abstract = {The main contribution of the paper is to show that Gaussian sketching of a kernel-Gram matrix $\bm K$ yields an operator whose counterpart in an RKHS $\cal H$, is a \emph{random projection} operator—in the spirit of Johnson-Lindenstrauss (J-L) lemma. To be precise, given a random matrix $Z$ with i.i.d. Gaussian entries, we show that a sketch $Z\bm{K}$ corresponds to a particular random operator in (infinite-dimensional) Hilbert space $\cal H$ that maps functions $f \in \cal H$ to a low-dimensional space $\bb R^d$, while preserving a weighted RKHS inner-product of the form $⟨f, g \rangle_{\Sigma} \doteq ⟨f, \Sigma^3 g \rangle_{\cal H}$, where $\Sigma$ is the \emph{covariance} operator induced by the data distribution. In particular, under similar assumptions as in kernel PCA (KPCA), or kernel $k$-means (K-$k$-means), well-separated subsets of feature-space $\{K(\cdot, x): x \in \cal X\}$ remain well-separated after such operation, which suggests similar benefits as in KPCA and/or K-$k$-means, albeit at the much cheaper cost of a random projection. In particular, our convergence rates suggest that, given a large dataset $\{X_i\}_{i=1}^N$ of size $N$, we can build the Gram matrix $\bm K$ on a much smaller subsample of size $n\ll N$, so that the sketch $Z\bm K$ is very cheap to obtain and subsequently apply as a projection operator on the original data $\{X_i\}_{i=1}^N$. We verify these insights empirically on synthetic data, and on real-world clustering applications.} }
Endnote
%0 Conference Paper %T Gaussian Sketching yields a J-L Lemma in RKHS %A Samory Kpotufe %A Bharath Sriperumbudur %B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2020 %E Silvia Chiappa %E Roberto Calandra %F pmlr-v108-kpotufe20a %I PMLR %P 3928--3937 %U https://proceedings.mlr.press/v108/kpotufe20a.html %V 108 %X The main contribution of the paper is to show that Gaussian sketching of a kernel-Gram matrix $\bm K$ yields an operator whose counterpart in an RKHS $\cal H$, is a \emph{random projection} operator—in the spirit of Johnson-Lindenstrauss (J-L) lemma. To be precise, given a random matrix $Z$ with i.i.d. Gaussian entries, we show that a sketch $Z\bm{K}$ corresponds to a particular random operator in (infinite-dimensional) Hilbert space $\cal H$ that maps functions $f \in \cal H$ to a low-dimensional space $\bb R^d$, while preserving a weighted RKHS inner-product of the form $⟨f, g \rangle_{\Sigma} \doteq ⟨f, \Sigma^3 g \rangle_{\cal H}$, where $\Sigma$ is the \emph{covariance} operator induced by the data distribution. In particular, under similar assumptions as in kernel PCA (KPCA), or kernel $k$-means (K-$k$-means), well-separated subsets of feature-space $\{K(\cdot, x): x \in \cal X\}$ remain well-separated after such operation, which suggests similar benefits as in KPCA and/or K-$k$-means, albeit at the much cheaper cost of a random projection. In particular, our convergence rates suggest that, given a large dataset $\{X_i\}_{i=1}^N$ of size $N$, we can build the Gram matrix $\bm K$ on a much smaller subsample of size $n\ll N$, so that the sketch $Z\bm K$ is very cheap to obtain and subsequently apply as a projection operator on the original data $\{X_i\}_{i=1}^N$. We verify these insights empirically on synthetic data, and on real-world clustering applications.
APA
Kpotufe, S. & Sriperumbudur, B. (2020). Gaussian Sketching yields a J-L Lemma in RKHS. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3928-3937. Available from https://proceedings.mlr.press/v108/kpotufe20a.html.