You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets

Tianjin Huang, Tianlong Chen, Meng Fang, Vlado Menkovski, Jiaxu Zhao, Lu Yin, Yulong Pei, Decebal Constantin Mocanu, Zhangyang Wang, Mykola Pechenizkiy, Shiwei Liu
Proceedings of the First Learning on Graphs Conference, PMLR 198:8:1-8:17, 2022.

Abstract

Recent works have impressively demonstrated that there exists a subnetwork in randomly initialized convolutional neural networks (CNNs) that can match the performance of the fully trained dense network at initialization, without any optimization of the network's weights (i.e., untrained networks). However, the presence of such untrained subnetworks in graph neural networks (GNNs) remains mysterious. In this paper, we carry out the first-of-its-kind exploration of discovering matching untrained GNNs. With sparsity as the core tool, we find untrained sparse subnetworks at initialization that can match the performance of fully trained dense GNNs. Beyond this already encouraging finding of comparable performance, we show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem, and hence become a powerful tool for enabling deeper GNNs without bells and whistles. We also observe that such sparse untrained subnetworks show appealing performance in out-of-distribution detection and robustness to input perturbations. We evaluate our method across widely used GNN architectures on various popular datasets, including the Open Graph Benchmark (OGB).
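To make the core idea concrete, the sketch below illustrates the general "supermask" approach commonly used to find untrained subnetworks (edge-popup-style score training), applied here to a toy GCN layer. This is an illustrative assumption, not the authors' exact procedure: the class names (TopKMask, MaskedGCNLayer), the straight-through top-k selection, and the hyperparameters are hypothetical. The point it demonstrates is the one in the abstract: the weights stay frozen at their random initialization and only a per-weight score is trained, so the learned binary mask alone defines the untrained sparse subnetwork.

# Illustrative sketch only; not the authors' exact method.
import torch
import torch.nn as nn


class TopKMask(torch.autograd.Function):
    # Keeps the top-(1 - sparsity) fraction of scores as a binary mask;
    # gradients flow straight through to the scores (straight-through estimator).
    @staticmethod
    def forward(ctx, scores, sparsity):
        k = max(1, int((1.0 - sparsity) * scores.numel()))
        mask = torch.zeros_like(scores)
        keep = torch.topk(scores.abs().flatten(), k).indices
        mask.view(-1)[keep] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # gradient goes to the scores; none for sparsity


class MaskedGCNLayer(nn.Module):
    # GCN-style layer whose weights are frozen at random initialization;
    # only the per-weight scores (and hence the mask) are learned.
    def __init__(self, in_dim, out_dim, sparsity=0.8):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_dim, out_dim), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)                 # never updated during training
        self.scores = nn.Parameter(0.01 * torch.randn(in_dim, out_dim))
        self.sparsity = sparsity

    def forward(self, x, adj_norm):
        # x: node features [N, in_dim]; adj_norm: normalized adjacency [N, N]
        mask = TopKMask.apply(self.scores, self.sparsity)
        return adj_norm @ (x @ (self.weight * mask))

Training such a layer would update only the scores (e.g., with SGD on a node-classification loss), while the weights remain at their random initialization; the resulting mask picks out the untrained sparse subnetwork.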

Cite this Paper


BibTeX
@InProceedings{pmlr-v198-huang22a,
  title     = {You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets},
  author    = {Huang, Tianjin and Chen, Tianlong and Fang, Meng and Menkovski, Vlado and Zhao, Jiaxu and Yin, Lu and Pei, Yulong and Mocanu, Decebal Constantin and Wang, Zhangyang and Pechenizkiy, Mykola and Liu, Shiwei},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {8:1--8:17},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/huang22a/huang22a.pdf},
  url       = {https://proceedings.mlr.press/v198/huang22a.html}
}
APA
Huang, T., Chen, T., Fang, M., Menkovski, V., Zhao, J., Yin, L., Pei, Y., Mocanu, D. C., Wang, Z., Pechenizkiy, M., & Liu, S. (2022). You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:8:1-8:17. Available from https://proceedings.mlr.press/v198/huang22a.html.