Boosting Graph Contrastive Learning via Graph Contrastive Saliency

Chunyu Wei, Yu Wang, Bing Bai, Kai Ni, David Brady, Lu Fang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:36839-36855, 2023.

Abstract

Graph augmentation plays a crucial role in achieving good generalization for contrastive graph self-supervised learning. However, mainstream Graph Contrastive Learning (GCL) often favors random graph augmentations, such as random node dropout or edge perturbation. Random augmentations may inevitably corrupt semantic information during training and force the network to mistakenly focus on semantically irrelevant background structures. To address these limitations and improve generalization, we propose a novel self-supervised learning framework for GCL, which can adaptively screen semantically related substructures in graphs by capitalizing on the proposed gradient-based Graph Contrastive Saliency (GCS). The goal is to identify the most semantically discriminative structures of a graph via contrastive learning, such that we can generate semantically meaningful augmentations by leveraging saliency. Empirical evidence on 16 benchmark datasets demonstrates the merits of the GCS-based framework. We also provide rigorous theoretical justification for GCS's robustness properties. Code is available at https://github.com/GCS2023/GCS.
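The core idea in the abstract, scoring graph substructures by how much they drive the contrastive objective and keeping only the most salient ones when augmenting, can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a gradient-times-input saliency proxy on a linear mean-pooled graph embedding (for which gradient-times-input reduces to each node's contribution to the view similarity), and all names (`node_features`, `anchor`, `top_k_nodes`) are illustrative assumptions.

```python
# Hedged sketch of saliency-guided augmentation (assumed names, not the
# paper's code): rank nodes by their contribution to the similarity between
# two graph views, then keep the top-k nodes as a semantics-preserving view.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def saliency(node_features, anchor):
    """Per-node saliency via gradient-times-input. For a mean-pooled
    embedding, similarity = (1/n) * sum_i dot(x_i, anchor), so
    x_i . d(sim)/d(x_i) = dot(x_i, anchor) / n -- node i's contribution."""
    n = len(node_features)
    return [abs(dot(x, anchor)) / n for x in node_features]

def top_k_nodes(node_features, anchor, k):
    """Indices of the k most salient nodes (the substructure to retain
    when generating an augmented view)."""
    s = saliency(node_features, anchor)
    return sorted(range(len(s)), key=lambda i: s[i], reverse=True)[:k]

# Toy example: 4 nodes with 2-d features; anchor is the second view's embedding.
feats = [[1.0, 0.0], [0.0, 1.0], [3.0, 0.0], [0.1, 0.1]]
anchor = [1.0, 0.0]
print(top_k_nodes(feats, anchor, 2))  # -> [2, 0]
```

The paper computes saliency from gradients of a contrastive loss through a trained GNN encoder; this toy linear version only shows the selection mechanism.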

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-wei23c,
  title     = {Boosting Graph Contrastive Learning via Graph Contrastive Saliency},
  author    = {Wei, Chunyu and Wang, Yu and Bai, Bing and Ni, Kai and Brady, David and Fang, Lu},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {36839--36855},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/wei23c/wei23c.pdf},
  url       = {https://proceedings.mlr.press/v202/wei23c.html}
}
Endnote
%0 Conference Paper
%T Boosting Graph Contrastive Learning via Graph Contrastive Saliency
%A Chunyu Wei
%A Yu Wang
%A Bing Bai
%A Kai Ni
%A David Brady
%A Lu Fang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-wei23c
%I PMLR
%P 36839--36855
%U https://proceedings.mlr.press/v202/wei23c.html
%V 202
APA
Wei, C., Wang, Y., Bai, B., Ni, K., Brady, D. & Fang, L. (2023). Boosting Graph Contrastive Learning via Graph Contrastive Saliency. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:36839-36855. Available from https://proceedings.mlr.press/v202/wei23c.html.

Related Material