AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks

Duc N.M Hoang, Kaixiong Zhou, Tianlong Chen, Xia Hu, Zhangyang Wang
Proceedings of the First International Conference on Automated Machine Learning, PMLR 188:4/1-16, 2022.

Abstract

Neural architecture search (NAS) has demonstrated success in discovering promising architectures for vision and language modeling tasks, and it has recently been introduced to the search for graph neural networks (GNNs) as well. Despite this preliminary success, GNNs struggle with heterophilic, or low-homophily, graphs, where connected nodes may have different class labels and dissimilar features. To address this, we propose co-optimizing the input graph topology and the model’s architecture topology simultaneously. This yields AutoCoG, the first unified data-model co-search NAS framework for GNNs. By defining a highly flexible data-model co-search space, AutoCoG is formulated as a principled bi-level optimization problem that can be solved end-to-end by differentiable search methods. Experiments show that AutoCoG achieves gains of up to 4% on Actor, 7.3% on average on the Web datasets, 0.17% on CoAuthor-CS, and 5.4% on the Wikipedia-Photo benchmark. All code will be released upon paper acceptance.
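For orientation, the display below is a minimal sketch of the bi-level objective that a DARTS-style differentiable data-model co-search typically instantiates; the symbols w (GNN weights), α (relaxed architecture parameters), and g (graph-topology parameters, e.g. learned edge weights) are illustrative notation, not necessarily the paper’s own:

\min_{\alpha,\, g} \; \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(\alpha, g),\, \alpha,\, g\bigr)
\quad \text{s.t.} \quad w^{*}(\alpha, g) \,=\, \arg\min_{w} \; \mathcal{L}_{\mathrm{train}}(w,\, \alpha,\, g)

The upper level selects the architecture and graph parameters on the validation loss, while the lower level fits the model weights on the training loss; in practice the nested solution is approximated by alternating gradient steps on the two levels.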

Cite this Paper


BibTeX
@InProceedings{pmlr-v188-hoang22a,
  title     = {AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks},
  author    = {Hoang, Duc N.M and Zhou, Kaixiong and Chen, Tianlong and Hu, Xia and Wang, Zhangyang},
  booktitle = {Proceedings of the First International Conference on Automated Machine Learning},
  pages     = {4/1--16},
  year      = {2022},
  editor    = {Guyon, Isabelle and Lindauer, Marius and van der Schaar, Mihaela and Hutter, Frank and Garnett, Roman},
  volume    = {188},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v188/hoang22a/hoang22a.pdf},
  url       = {https://proceedings.mlr.press/v188/hoang22a.html}
}
Endnote
%0 Conference Paper
%T AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks
%A Duc N.M Hoang
%A Kaixiong Zhou
%A Tianlong Chen
%A Xia Hu
%A Zhangyang Wang
%B Proceedings of the First International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Isabelle Guyon
%E Marius Lindauer
%E Mihaela van der Schaar
%E Frank Hutter
%E Roman Garnett
%F pmlr-v188-hoang22a
%I PMLR
%P 4/1--16
%U https://proceedings.mlr.press/v188/hoang22a.html
%V 188
APA
Hoang, D.N., Zhou, K., Chen, T., Hu, X. & Wang, Z. (2022). AutoCoG: A Unified Data-Model Co-Search Framework for Graph Neural Networks. Proceedings of the First International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 188:4/1-16. Available from https://proceedings.mlr.press/v188/hoang22a.html.