The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction

Alex Kokot, Octavian-Vlad Murad, Marina Meila
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:31158-31181, 2025.

Abstract

In this paper, we clarify the effect of noise on common spectrally motivated algorithms such as Diffusion Maps (DM) for dimension reduction. Empirically, these methods are much more robust to noise than current work suggests. Specifically, existing consistency results require that either the noise amplitude or dimensionality must vary with the sample size $n$. We provide new theoretical results demonstrating that low-frequency eigenpairs reliably capture the geometry of the underlying manifold under a constant noise level, up to a dimension independent threshold $O(r^{-2})$, where $r$ is the noise amplitude. Our results rely on a decomposition of the manifold Laplacian in the Sasaki metric, a technique not used before in this area, to our knowledge. We experimentally validate our theoretical predictions. Additionally, we observe similar robust behavior for other manifold learning algorithms which are not based on computing the Laplacian, namely LTSA and VAE.

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-kokot25a,
  title     = {The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction},
  author    = {Kokot, Alex and Murad, Octavian-Vlad and Meila, Marina},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {31158--31181},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/kokot25a/kokot25a.pdf},
  url       = {https://proceedings.mlr.press/v267/kokot25a.html},
  abstract  = {In this paper, we clarify the effect of noise on common spectrally motivated algorithms such as Diffusion Maps (DM) for dimension reduction. Empirically, these methods are much more robust to noise than current work suggests. Specifically, existing consistency results require that either the noise amplitude or dimensionality must vary with the sample size $n$. We provide new theoretical results demonstrating that low-frequency eigenpairs reliably capture the geometry of the underlying manifold under a constant noise level, up to a dimension independent threshold $O(r^{-2})$, where $r$ is the noise amplitude. Our results rely on a decomposition of the manifold Laplacian in the Sasaki metric, a technique not used before in this area, to our knowledge. We experimentally validate our theoretical predictions. Additionally, we observe similar robust behavior for other manifold learning algorithms which are not based on computing the Laplacian, namely LTSA and VAE.}
}
Endnote
%0 Conference Paper
%T The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction
%A Alex Kokot
%A Octavian-Vlad Murad
%A Marina Meila
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-kokot25a
%I PMLR
%P 31158--31181
%U https://proceedings.mlr.press/v267/kokot25a.html
%V 267
%X In this paper, we clarify the effect of noise on common spectrally motivated algorithms such as Diffusion Maps (DM) for dimension reduction. Empirically, these methods are much more robust to noise than current work suggests. Specifically, existing consistency results require that either the noise amplitude or dimensionality must vary with the sample size $n$. We provide new theoretical results demonstrating that low-frequency eigenpairs reliably capture the geometry of the underlying manifold under a constant noise level, up to a dimension independent threshold $O(r^{-2})$, where $r$ is the noise amplitude. Our results rely on a decomposition of the manifold Laplacian in the Sasaki metric, a technique not used before in this area, to our knowledge. We experimentally validate our theoretical predictions. Additionally, we observe similar robust behavior for other manifold learning algorithms which are not based on computing the Laplacian, namely LTSA and VAE.
APA
Kokot, A., Murad, O.V. & Meila, M. (2025). The Noisy Laplacian: a Threshold Phenomenon for Non-Linear Dimension Reduction. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:31158-31181. Available from https://proceedings.mlr.press/v267/kokot25a.html.
