Gaussian Sketching yields a J-L Lemma in RKHS
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3928-3937, 2020.
Abstract
The main contribution of the paper is to show that Gaussian sketching of a kernel Gram matrix $\mathbf{K}$ yields an operator whose counterpart in an RKHS $\mathcal{H}$ is a \emph{random projection} operator, in the spirit of the Johnson-Lindenstrauss (J-L) lemma. To be precise, given a random matrix $Z$ with i.i.d. Gaussian entries, we show that a sketch $Z\mathbf{K}$ corresponds to a particular random operator in the (infinite-dimensional) Hilbert space $\mathcal{H}$ that maps functions $f \in \mathcal{H}$ to a low-dimensional space $\mathbb{R}^d$, while preserving a weighted RKHS inner product of the form $\langle f, g \rangle_\Sigma \doteq \langle f, \Sigma^3 g \rangle_{\mathcal{H}}$, where $\Sigma$ is the \emph{covariance} operator induced by the data distribution. In particular, under similar assumptions as in kernel PCA (KPCA) or kernel $k$-means (K-$k$-means), well-separated subsets of the feature space $\{K(\cdot, x) : x \in \mathcal{X}\}$ remain well separated after such an operation, which suggests similar benefits as in KPCA and/or K-$k$-means, albeit at the much cheaper cost of a random projection. Moreover, our convergence rates suggest that, given a large dataset $\{X_i\}_{i=1}^N$ of size $N$, we can build the Gram matrix $\mathbf{K}$ on a much smaller subsample of size $n \ll N$, so that the sketch $Z\mathbf{K}$ is very cheap to obtain and subsequently apply as a projection operator on the original data $\{X_i\}_{i=1}^N$. We verify these insights empirically on synthetic data and on real-world clustering applications.
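
To make the subsample-then-sketch pipeline concrete, here is a minimal Python sketch, not the authors' code, of one plausible way to use $Z\mathbf{K}$ as a cheap projection: build the Gram matrix on an $n$-point subsample, draw a Gaussian matrix $Z \in \mathbb{R}^{d \times n}$, and map every original point through its kernel vector against the subsample. The RBF kernel, the $1/\sqrt{d}$ scaling, and all parameter values are illustrative assumptions rather than choices taken from the paper.

# Illustrative sketch only: the kernel, scaling, and parameters below are assumptions.
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    """RBF kernel matrix exp(-gamma * ||a - b||^2) of shape (len(A), len(B))."""
    sq_a = np.sum(A**2, axis=1)[:, None]
    sq_b = np.sum(B**2, axis=1)[None, :]
    return np.exp(-gamma * (sq_a + sq_b - 2.0 * A @ B.T))

def gaussian_sketch_projection(X, n=200, d=20, gamma=1.0, seed=0):
    """Map the N rows of X to R^d via a Gaussian sketch built on an n-point subsample."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    idx = rng.choice(N, size=min(n, N), replace=False)   # subsample of size n << N
    Xs = X[idx]

    Z = rng.standard_normal((d, len(idx))) / np.sqrt(d)  # i.i.d. Gaussian sketching matrix

    # Sketch Z K of the subsample's Gram matrix; its columns are the images of the
    # subsampled feature vectors K(., X_j) under the random operator.
    K = rbf_gram(Xs, Xs, gamma)
    sketch = Z @ K

    # Projecting an arbitrary point only needs its kernel vector against the subsample,
    # so the full dataset is mapped at O(N * n * d) cost.
    K_cross = rbf_gram(Xs, X, gamma)      # shape (n, N)
    projections = (Z @ K_cross).T         # shape (N, d)
    return projections, sketch

# Example usage: cluster the d-dimensional projections instead of all N kernel features.
X = np.random.default_rng(1).normal(size=(5000, 10))
proj, _ = gaussian_sketch_projection(X, n=200, d=20, gamma=0.5)
print(proj.shape)  # (5000, 20)

The point of the example is the cost profile highlighted in the abstract: only an $n \times n$ Gram matrix and an $n \times N$ cross-kernel matrix are ever formed, so the projection is cheap relative to working with the full $N \times N$ kernel matrix.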