Contrastive Multi-View Representation Learning on Graphs

Kaveh Hassani, Amir Hosein Khasahmadi
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4116-4126, 2020.

Abstract

We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings does not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over the previous state-of-the-art. When compared to supervised baselines, our approach outperforms them in 4 out of 8 benchmarks.
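The abstract contrasts two structural views of a graph: the first-order neighborhood (the adjacency matrix) and a graph diffusion. As a hedged illustration, the sketch below computes a Personalized PageRank (PPR) diffusion matrix, a standard choice of graph diffusion, for a toy graph; the choice of PPR, the teleport parameter `alpha`, and the helper name `ppr_diffusion` are illustrative assumptions, not details taken from this page.

```python
# Sketch (assumption): one common graph diffusion is Personalized PageRank,
# S = alpha * (I - (1 - alpha) * D^{-1/2} A D^{-1/2})^{-1}.
# The dense diffusion view S complements the sparse first-order adjacency view A.
import numpy as np

def ppr_diffusion(adj, alpha=0.2):
    """Return the PPR diffusion matrix for a symmetric adjacency matrix."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_norm = d_inv_sqrt @ adj @ d_inv_sqrt  # symmetrically normalized adjacency
    n = adj.shape[0]
    return alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * a_norm)

# Toy 4-node path graph: nodes 0 and 2 are not adjacent in A,
# but the diffusion view assigns them a positive weight.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S = ppr_diffusion(A)
print(S.round(3))
```

In a contrastive setup of this kind, one encoder would operate on `A` and another on `S`, and their node/graph encodings would be pushed to agree.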

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-hassani20a,
  title     = {Contrastive Multi-View Representation Learning on Graphs},
  author    = {Hassani, Kaveh and Khasahmadi, Amir Hosein},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4116--4126},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/hassani20a/hassani20a.pdf},
  url       = {http://proceedings.mlr.press/v119/hassani20a.html},
  abstract  = {We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings do not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over previous state-of-the-art. When compared to supervised baselines, our approach outperforms them in 4 out of 8 benchmarks.}
}
Endnote
%0 Conference Paper
%T Contrastive Multi-View Representation Learning on Graphs
%A Kaveh Hassani
%A Amir Hosein Khasahmadi
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-hassani20a
%I PMLR
%P 4116--4126
%U http://proceedings.mlr.press/v119/hassani20a.html
%V 119
%X We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the number of views to more than two or contrasting multi-scale encodings do not improve performance, and the best performance is achieved by contrasting encodings from first-order neighbors and a graph diffusion. We achieve new state-of-the-art results in self-supervised learning on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol. For example, on Cora (node) and Reddit-Binary (graph) classification benchmarks, we achieve 86.8% and 84.5% accuracy, which are 5.5% and 2.4% relative improvements over previous state-of-the-art. When compared to supervised baselines, our approach outperforms them in 4 out of 8 benchmarks.
APA
Hassani, K. & Khasahmadi, A.H. (2020). Contrastive Multi-View Representation Learning on Graphs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4116-4126. Available from http://proceedings.mlr.press/v119/hassani20a.html.