When Does Self-Supervision Help Graph Convolutional Networks?

Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:10871-10880, 2020.

Abstract

Self-supervision as an emerging technique has been employed to train convolutional neural networks (CNNs) for more transferable, generalizable, and robust representation learning of images. Its introduction to graph convolutional networks (GCNs) operating on graph data has, however, rarely been explored. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into GCNs. We first elaborate on three mechanisms to incorporate self-supervision into GCNs, analyze the limitations of pretraining & finetuning and self-training, and proceed to focus on multi-task learning. Moreover, we propose and investigate three novel self-supervised learning tasks for GCNs with theoretical rationales and numerical comparisons. Lastly, we further integrate multi-task self-supervision into graph adversarial training. Our results show that, with properly designed task forms and incorporation mechanisms, self-supervision benefits GCNs in gaining more generalizability and robustness. Our code is available at https://github.com/Shen-Lab/SS-GCNs.
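
The multi-task scheme the abstract focuses on (a shared GCN encoder trained jointly on the target node-classification loss and a weighted self-supervised loss) can be sketched as below. This is a minimal, illustrative PyTorch sketch rather than the authors' released SS-GCNs code: the feature-reconstruction pretext task, the class names, and the weight lam are assumptions made purely for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    # Single GCN layer operating on a dense, symmetrically normalized adjacency.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        return self.lin(adj_norm @ x)

class MultiTaskGCN(nn.Module):
    # Shared GCN encoder with a classification head (target task) and a
    # self-supervised head (here: node-feature reconstruction as a stand-in
    # for the paper's self-supervised tasks).
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.gc1 = DenseGCNLayer(in_dim, hid_dim)
        self.cls_head = DenseGCNLayer(hid_dim, n_classes)
        self.ssl_head = nn.Linear(hid_dim, in_dim)

    def forward(self, x, adj_norm):
        h = F.relu(self.gc1(x, adj_norm))
        return self.cls_head(h, adj_norm), self.ssl_head(h)

def normalize_adj(adj):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by GCNs.
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(1).pow(-0.5)
    return d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

# Toy usage on a random graph with a few labeled nodes.
n, d, c = 8, 16, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) > 0.7).float()
adj = ((adj + adj.t()) > 0).float()
adj_norm = normalize_adj(adj)
labels = torch.randint(0, c, (n,))
train_mask = torch.tensor([True, True, True, False, False, False, False, False])

model = MultiTaskGCN(d, 32, c)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
lam = 0.5  # illustrative weight on the self-supervised loss

for _ in range(20):
    opt.zero_grad()
    logits, x_rec = model(x, adj_norm)
    loss_cls = F.cross_entropy(logits[train_mask], labels[train_mask])  # supervised term on labeled nodes
    loss_ssl = F.mse_loss(x_rec, x)                                     # self-supervised term on all nodes
    (loss_cls + lam * loss_ssl).backward()
    opt.step()

The key design point is that both losses are backpropagated through the same encoder, so the self-supervised signal regularizes the representation used by the target task.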

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-you20a,
  title     = {When Does Self-Supervision Help Graph Convolutional Networks?},
  author    = {You, Yuning and Chen, Tianlong and Wang, Zhangyang and Shen, Yang},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {10871--10880},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/you20a/you20a.pdf},
  url       = {https://proceedings.mlr.press/v119/you20a.html}
}
Endnote
%0 Conference Paper
%T When Does Self-Supervision Help Graph Convolutional Networks?
%A Yuning You
%A Tianlong Chen
%A Zhangyang Wang
%A Yang Shen
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-you20a
%I PMLR
%P 10871--10880
%U https://proceedings.mlr.press/v119/you20a.html
%V 119
APA
You, Y., Chen, T., Wang, Z. & Shen, Y. (2020). When Does Self-Supervision Help Graph Convolutional Networks?. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:10871-10880. Available from https://proceedings.mlr.press/v119/you20a.html.