Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks

Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang, Bei Yu
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:38826-38847, 2023.

Abstract

Neural architecture search (NAS) for graph neural networks (GNNs), known as NAS-GNN, has achieved significant performance gains over manually designed GNN architectures. However, these methods inherit issues from conventional NAS, such as high computational cost and optimization difficulty. More importantly, previous NAS methods have ignored a unique property of GNNs: they possess expressive power even without training. Starting from randomly initialized weights, we can therefore seek the optimal architecture parameters via a sparse coding objective, yielding a novel NAS-GNN method named neural architecture coding (NAC). Consequently, NAC requires no weight updates on the GNN and runs in linear time. Empirical evaluations on multiple GNN benchmark datasets demonstrate that our approach achieves state-of-the-art performance, being up to $200\times$ faster and $18.8\%$ more accurate than strong baselines.
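
The core recipe in the abstract, keeping the GNN weights at their random initialization and optimizing only the architecture coefficients under a sparse-coding-style (L1-regularized) objective, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' NAC implementation; the toy graph, the three candidate operations, and the alpha/L1 setup are assumptions made purely for illustration.

# Illustrative sketch only (not the authors' code): a tiny one-shot GNN supernet
# whose candidate operations keep their random initialization (frozen), while
# only the architecture coefficients `alpha` are optimized with an
# L1-regularized (sparse-coding-style) objective.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy graph: 100 nodes, 16-dimensional features, 4 classes, random edges.
n, d, c = 100, 16, 4
x = torch.randn(n, d)
y = torch.randint(0, c, (n,))
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()            # symmetrize + self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
a_hat = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]   # normalized adjacency

# Candidate operations with frozen, randomly initialized weights (never updated).
w_1hop = torch.randn(d, c) / d ** 0.5
w_2hop = torch.randn(d, c) / d ** 0.5
w_mlp  = torch.randn(d, c) / d ** 0.5
ops = [
    lambda h: a_hat @ (h @ w_1hop),            # one-hop propagation + linear map
    lambda h: a_hat @ (a_hat @ (h @ w_2hop)),  # two-hop propagation + linear map
    lambda h: h @ w_mlp,                       # feature-only linear map (no propagation)
]

# Only the architecture coefficients are trainable; the L1 term encourages a
# sparse selection over the candidate operations.
alpha = torch.zeros(len(ops), requires_grad=True)
opt = torch.optim.Adam([alpha], lr=0.1)

for step in range(200):
    logits = sum(a * op(x) for a, op in zip(alpha, ops))
    loss = F.cross_entropy(logits, y) + 1e-2 * alpha.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("architecture coefficients:", alpha.detach())

Using gradient descent on alpha here is only a simplification to keep the sketch short; the point it illustrates is that the GNN weights stay untrained while a sparse set of architecture coefficients is selected.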

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-xu23w, title = {Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks}, author = {Xu, Peng and Zhang, Lin and Liu, Xuanzhou and Sun, Jiaqi and Zhao, Yue and Yang, Haiqin and Yu, Bei}, booktitle = {Proceedings of the 40th International Conference on Machine Learning}, pages = {38826--38847}, year = {2023}, editor = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan}, volume = {202}, series = {Proceedings of Machine Learning Research}, month = {23--29 Jul}, publisher = {PMLR}, pdf = {https://proceedings.mlr.press/v202/xu23w/xu23w.pdf}, url = {https://proceedings.mlr.press/v202/xu23w.html}, abstract = {Neural architecture search (NAS) for Graph neural networks (GNNs), called NAS-GNNs, has achieved significant performance over manually designed GNN architectures. However, these methods inherit issues from the conventional NAS methods, such as high computational cost and optimization difficulty. More importantly, previous NAS methods have ignored the uniqueness of GNNs, where GNNs possess expressive power without training. With the randomly-initialized weights, we can then seek the optimal architecture parameters via the sparse coding objective and derive a novel NAS-GNNs method, namely neural architecture coding (NAC). Consequently, our NAC holds a no-update scheme on GNNs and can efficiently compute in linear time. Empirical evaluations on multiple GNN benchmark datasets demonstrate that our approach leads to state-of-the-art performance, which is up to $200\times$ faster and $18.8%$ more accurate than the strong baselines.} }
Endnote
%0 Conference Paper %T Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks %A Peng Xu %A Lin Zhang %A Xuanzhou Liu %A Jiaqi Sun %A Yue Zhao %A Haiqin Yang %A Bei Yu %B Proceedings of the 40th International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2023 %E Andreas Krause %E Emma Brunskill %E Kyunghyun Cho %E Barbara Engelhardt %E Sivan Sabato %E Jonathan Scarlett %F pmlr-v202-xu23w %I PMLR %P 38826--38847 %U https://proceedings.mlr.press/v202/xu23w.html %V 202 %X Neural architecture search (NAS) for Graph neural networks (GNNs), called NAS-GNNs, has achieved significant performance over manually designed GNN architectures. However, these methods inherit issues from the conventional NAS methods, such as high computational cost and optimization difficulty. More importantly, previous NAS methods have ignored the uniqueness of GNNs, where GNNs possess expressive power without training. With the randomly-initialized weights, we can then seek the optimal architecture parameters via the sparse coding objective and derive a novel NAS-GNNs method, namely neural architecture coding (NAC). Consequently, our NAC holds a no-update scheme on GNNs and can efficiently compute in linear time. Empirical evaluations on multiple GNN benchmark datasets demonstrate that our approach leads to state-of-the-art performance, which is up to $200\times$ faster and $18.8%$ more accurate than the strong baselines.
APA
Xu, P., Zhang, L., Liu, X., Sun, J., Zhao, Y., Yang, H. & Yu, B. (2023). Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:38826-38847. Available from https://proceedings.mlr.press/v202/xu23w.html.