GCVR: Reconstruction from Cross-View Enable Sufficient and Robust Graph Contrastive Learning

Qianlong Wen, Zhongyu Ouyang, Chunhui Zhang, Yiyue Qian, Chuxu Zhang, Yanfang Ye
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:3747-3764, 2024.

Abstract

Among existing self-supervised learning (SSL) methods for graphs, graph contrastive learning (GCL) frameworks typically generate supervision automatically by transforming the same graph into different views through graph augmentation operations. These computation-efficient augmentation techniques have made GCL a prevalent remedy for the shortage of supervision. Despite the remarkable performance of these GCL methods, the InfoMax principle used to guide their optimization has been shown to be insufficient: it cannot discard redundant information without also risking the loss of important features. In light of this, we introduce Graph Contrastive Learning with Cross-View Reconstruction (GCVR), which aims to learn robust and sufficient representations from graph data. Specifically, GCVR adds a cross-view reconstruction mechanism on top of conventional graph contrastive learning to elicit the essential features of raw graphs. In addition, we introduce an extra adversarial view, perturbed from the original view, into the contrastive loss to preserve the intactness of the graph semantics and strengthen representation robustness. We empirically demonstrate that our proposed model outperforms state-of-the-art baselines on graph classification tasks over multiple benchmark datasets.
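The combined objective described in the abstract (a standard contrastive term between two augmented views plus a cross-view reconstruction term) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the InfoNCE formulation is the conventional GCL loss, while the linear decoder `W`, the batch sizes, and the weighting coefficient are all hypothetical placeholders.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Conventional InfoNCE contrastive loss between two batches of view embeddings.
    Matched rows of z1 and z2 are positive pairs; all other rows are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # temperature-scaled cosine similarities
    logsumexp = np.log(np.exp(sim).sum(axis=1))  # log-partition over each row
    return float(np.mean(logsumexp - np.diag(sim)))

def cross_view_reconstruction(z1, z2, W):
    """Hypothetical linear decoder: reconstruct view-2 embeddings from view-1's.
    Forcing each view's representation to recover the other encourages the
    encoder to keep features essential to the underlying graph."""
    return float(np.mean((z1 @ W - z2) ** 2))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))       # embeddings of augmented view 1 (8 graphs)
z2 = rng.normal(size=(8, 16))       # embeddings of augmented view 2
W = rng.normal(size=(16, 16)) * 0.1  # illustrative decoder weights

# Weighted sum of the two terms; 0.5 is an arbitrary trade-off coefficient.
loss = info_nce(z1, z2) + 0.5 * cross_view_reconstruction(z1, z2, W)
```

In a real training loop both terms would be minimized jointly with respect to the encoder (and decoder) parameters; the adversarial view mentioned in the abstract would enter as an additional perturbed positive inside the contrastive term.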

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-wen24a,
  title     = {GCVR: Reconstruction from Cross-View Enable Sufficient and Robust Graph Contrastive Learning},
  author    = {Wen, Qianlong and Ouyang, Zhongyu and Zhang, Chunhui and Qian, Yiyue and Zhang, Chuxu and Ye, Yanfang},
  booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence},
  pages     = {3747--3764},
  year      = {2024},
  editor    = {Kiyavash, Negar and Mooij, Joris M.},
  volume    = {244},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/wen24a/wen24a.pdf},
  url       = {https://proceedings.mlr.press/v244/wen24a.html},
  abstract  = {Among the existing self-supervised learning (SSL) methods for graphs, graph contrastive learning (GCL) frameworks usually automatically generate supervision by transforming the same graph into different views through graph augmentation operations. The computation-efficient augmentation techniques enable the prevalent usage of GCL to alleviate the supervision shortage issue. Despite the remarkable performance of those GCL methods, the InfoMax principle used to guide the optimization of GCL has been proven to be insufficient to avoid redundant information without losing important features. In light of this, we introduce the Graph Contrastive Learning with Cross-View Reconstruction (GCVR), aiming to learn robust and sufficient representation from graph data. Specifically, GCVR introduces a cross-view reconstruction mechanism based on conventional graph contrastive learning to elicit those essential features from raw graphs. Besides, we introduce an extra adversarial view perturbed from the original view in the contrastive loss to pursue the intactness of the graph semantics and strengthen the representation robustness. We empirically demonstrate that our proposed model outperforms the state-of-the-art baselines on graph classification tasks over multiple benchmark datasets.}
}
Endnote
%0 Conference Paper
%T GCVR: Reconstruction from Cross-View Enable Sufficient and Robust Graph Contrastive Learning
%A Qianlong Wen
%A Zhongyu Ouyang
%A Chunhui Zhang
%A Yiyue Qian
%A Chuxu Zhang
%A Yanfang Ye
%B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2024
%E Negar Kiyavash
%E Joris M. Mooij
%F pmlr-v244-wen24a
%I PMLR
%P 3747--3764
%U https://proceedings.mlr.press/v244/wen24a.html
%V 244
%X Among the existing self-supervised learning (SSL) methods for graphs, graph contrastive learning (GCL) frameworks usually automatically generate supervision by transforming the same graph into different views through graph augmentation operations. The computation-efficient augmentation techniques enable the prevalent usage of GCL to alleviate the supervision shortage issue. Despite the remarkable performance of those GCL methods, the InfoMax principle used to guide the optimization of GCL has been proven to be insufficient to avoid redundant information without losing important features. In light of this, we introduce the Graph Contrastive Learning with Cross-View Reconstruction (GCVR), aiming to learn robust and sufficient representation from graph data. Specifically, GCVR introduces a cross-view reconstruction mechanism based on conventional graph contrastive learning to elicit those essential features from raw graphs. Besides, we introduce an extra adversarial view perturbed from the original view in the contrastive loss to pursue the intactness of the graph semantics and strengthen the representation robustness. We empirically demonstrate that our proposed model outperforms the state-of-the-art baselines on graph classification tasks over multiple benchmark datasets.
APA
Wen, Q., Ouyang, Z., Zhang, C., Qian, Y., Zhang, C. & Ye, Y. (2024). GCVR: Reconstruction from Cross-View Enable Sufficient and Robust Graph Contrastive Learning. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:3747-3764. Available from https://proceedings.mlr.press/v244/wen24a.html.