ML$^2$-GCL: Manifold Learning Inspired Lightweight Graph Contrastive Learning

Jianqing Liang, Zhiqiang Li, Xinkai Wei, Yuan Liu, Zhiqiang Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:37222-37236, 2025.

Abstract

Graph contrastive learning has attracted great interest in recent years as a dominant and promising self-supervised representation learning approach. While existing works follow the basic principle of pulling positive pairs closer and pushing negative pairs apart, they still suffer from several critical problems: the underlying semantic disturbance introduced by augmentation strategies, the failure of GCN to capture long-range dependencies, and the rigidity and inefficiency of node sampling techniques. To address these issues, we propose Manifold Learning Inspired Lightweight Graph Contrastive Learning (ML$^2$-GCL), which inherits the merits of both manifold learning and GCN. ML$^2$-GCL avoids the potential risk of semantic disturbance by using only a single view. It recovers global nonlinear structure from locally linear fits, compensating for the shortcomings of GCN. Its most notable advantage is its lightweight design, owing to a closed-form solution for the positive-pair weights and the removal of pairwise distance calculation. Theoretical analysis proves the existence of the optimal closed-form solution. Extensive empirical results on various benchmarks and evaluation protocols demonstrate the effectiveness and lightweight nature of ML$^2$-GCL. The code is available at https://github.com/a-hou/ML2-GCL.
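The "locally linear fits with a closed-form weight solution" described above echoes classical manifold learning in the spirit of Locally Linear Embedding. As a rough illustration only (not the paper's actual formulation), the following sketch shows what a closed-form local reconstruction-weight solve looks like: a point is expressed as an affine combination of its neighbors, and the optimal weights come from a single linear system over the local Gram matrix, with no iterative optimization.

```python
import numpy as np

def local_linear_weights(x, neighbors, reg=1e-3):
    """Closed-form affine reconstruction weights, LLE-style (illustrative).

    Minimizes ||x - sum_j w_j * neighbors[j]||^2 subject to sum_j w_j = 1.
    """
    k = len(neighbors)
    Z = neighbors - x                       # center neighbors at x
    C = Z @ Z.T                             # local Gram matrix (k x k)
    C += reg * np.trace(C) * np.eye(k)      # regularize for numerical stability
    w = np.linalg.solve(C, np.ones(k))      # one linear solve -- no iterations
    return w / w.sum()                      # enforce the sum-to-one constraint

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
x = rng.normal(size=5)
nbrs = rng.normal(size=(3, 5))
w = local_linear_weights(x, nbrs)
```

The names `local_linear_weights` and `reg` are assumptions for this sketch; ML$^2$-GCL's own closed-form solution for positive-pair weights operates on learned node embeddings and differs in its details.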

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-liang25h, title = {{ML}$^2$-{GCL}: Manifold Learning Inspired Lightweight Graph Contrastive Learning}, author = {Liang, Jianqing and Li, Zhiqiang and Wei, Xinkai and Liu, Yuan and Wang, Zhiqiang}, booktitle = {Proceedings of the 42nd International Conference on Machine Learning}, pages = {37222--37236}, year = {2025}, editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry}, volume = {267}, series = {Proceedings of Machine Learning Research}, month = {13--19 Jul}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/liang25h/liang25h.pdf}, url = {https://proceedings.mlr.press/v267/liang25h.html}, abstract = {Graph contrastive learning has attracted great interest in recent years as a dominant and promising self-supervised representation learning approach. While existing works follow the basic principle of pulling positive pairs closer and pushing negative pairs apart, they still suffer from several critical problems: the underlying semantic disturbance introduced by augmentation strategies, the failure of GCN to capture long-range dependencies, and the rigidity and inefficiency of node sampling techniques. To address these issues, we propose Manifold Learning Inspired Lightweight Graph Contrastive Learning (ML$^2$-GCL), which inherits the merits of both manifold learning and GCN. ML$^2$-GCL avoids the potential risk of semantic disturbance by using only a single view. It recovers global nonlinear structure from locally linear fits, compensating for the shortcomings of GCN. Its most notable advantage is its lightweight design, owing to a closed-form solution for the positive-pair weights and the removal of pairwise distance calculation. Theoretical analysis proves the existence of the optimal closed-form solution. Extensive empirical results on various benchmarks and evaluation protocols demonstrate the effectiveness and lightweight nature of ML$^2$-GCL. The code is available at https://github.com/a-hou/ML2-GCL.} }
Endnote
%0 Conference Paper %T ML$^2$-GCL: Manifold Learning Inspired Lightweight Graph Contrastive Learning %A Jianqing Liang %A Zhiqiang Li %A Xinkai Wei %A Yuan Liu %A Zhiqiang Wang %B Proceedings of the 42nd International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2025 %E Aarti Singh %E Maryam Fazel %E Daniel Hsu %E Simon Lacoste-Julien %E Felix Berkenkamp %E Tegan Maharaj %E Kiri Wagstaff %E Jerry Zhu %F pmlr-v267-liang25h %I PMLR %P 37222--37236 %U https://proceedings.mlr.press/v267/liang25h.html %V 267 %X Graph contrastive learning has attracted great interest in recent years as a dominant and promising self-supervised representation learning approach. While existing works follow the basic principle of pulling positive pairs closer and pushing negative pairs apart, they still suffer from several critical problems: the underlying semantic disturbance introduced by augmentation strategies, the failure of GCN to capture long-range dependencies, and the rigidity and inefficiency of node sampling techniques. To address these issues, we propose Manifold Learning Inspired Lightweight Graph Contrastive Learning (ML$^2$-GCL), which inherits the merits of both manifold learning and GCN. ML$^2$-GCL avoids the potential risk of semantic disturbance by using only a single view. It recovers global nonlinear structure from locally linear fits, compensating for the shortcomings of GCN. Its most notable advantage is its lightweight design, owing to a closed-form solution for the positive-pair weights and the removal of pairwise distance calculation. Theoretical analysis proves the existence of the optimal closed-form solution. Extensive empirical results on various benchmarks and evaluation protocols demonstrate the effectiveness and lightweight nature of ML$^2$-GCL. The code is available at https://github.com/a-hou/ML2-GCL.
APA
Liang, J., Li, Z., Wei, X., Liu, Y. & Wang, Z. (2025). ML$^2$-GCL: Manifold Learning Inspired Lightweight Graph Contrastive Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:37222-37236. Available from https://proceedings.mlr.press/v267/liang25h.html.
