The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:31158-31181, 2025.
Abstract
In this paper, we clarify the effect of noise on common spectrally motivated algorithms such as Diffusion Maps (DM) for dimension reduction. Empirically, these methods are much more robust to noise than current work suggests. Specifically, existing consistency results require that either the noise amplitude or the noise dimensionality vary with the sample size $n$. We provide new theoretical results demonstrating that low-frequency eigenpairs reliably capture the geometry of the underlying manifold under a constant noise level, up to a dimension-independent threshold $O(r^{-2})$, where $r$ is the noise amplitude. Our results rely on a decomposition of the manifold Laplacian in the Sasaki metric, a technique that, to our knowledge, has not previously been used in this area. We experimentally validate our theoretical predictions. Additionally, we observe similarly robust behavior for other manifold learning algorithms that are not based on computing the Laplacian, namely LTSA and VAE.
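To make the phenomenon concrete, here is a minimal, self-contained sketch (not the authors' code) of a standard diffusion-map construction applied to noisy samples of a circle: even with a constant noise amplitude $r$, the low-frequency eigenvectors of the normalized kernel still recover the intrinsic angle. The sample size, kernel bandwidth `eps`, and noise level `r` below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 800, 0.1                          # sample size and constant noise amplitude
theta = rng.uniform(0.0, 2 * np.pi, n)   # latent coordinate on the unit circle
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
X += r * rng.standard_normal(X.shape)    # ambient noise at a fixed level

# Gaussian kernel with bandwidth eps (an illustrative choice)
eps = 0.05
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / eps)

# Symmetric normalization M = D^{-1/2} K D^{-1/2}; rescaling its eigenvectors
# by D^{-1/2} yields diffusion-map coordinates.
d = K.sum(axis=1)
M = K / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(M)           # eigenvalues in ascending order
phi = vecs[:, ::-1] / np.sqrt(d)[:, None]

# Low-frequency eigenpairs: on a circle the first nontrivial pair spans
# {cos(theta), sin(theta)}, so (phi1, phi2) traces out the circle.
phi1, phi2 = phi[:, 1], phi[:, 2]
est = np.arctan2(phi2, phi1)

# Sanity check: the embedding angle matches theta up to rotation/reflection,
# so the circular alignment below should be close to 1 despite the noise.
align = max(abs(np.mean(np.exp(1j * (est - theta)))),
            abs(np.mean(np.exp(1j * (est + theta)))))
print(f"circular alignment: {align:.3f}")
```

Under the abstract's threshold picture, one would expect such recovery to hold only for the low-frequency eigenpairs; pushing `r` higher (or reading off higher-index eigenvectors, past roughly $O(r^{-2})$) should degrade the alignment.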