AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization

Haibo Chen, Xin Wang, Zeyang Zhang, Haoyang Li, Ling Feng, Wenwu Zhu
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:9295-9315, 2025.

Abstract

Graph foundation models (GFMs) aim to share graph knowledge across diverse domains and tasks to boost graph machine learning. However, existing GFMs rely on hand-designed and fixed graph neural network (GNN) architectures, failing to utilize optimal architectures w.r.t. specific domains and tasks, inevitably leading to suboptimal performance in diverse graph domains and tasks. In this paper, we explore graph neural architecture search (GNAS) for GFMs for the first time, which suffers from the problem of architecture inconsistency, i.e., the optimal architectures for different tasks and domains vary. We tackle this problem by discovering an invariant graph-architecture relationship across domains and tasks, which imposes three challenges: i) how to capture invariant and variant patterns; ii) how to customize architectures to adapt to diverse domains and tasks; iii) how to mitigate the data domination phenomenon during the architecture search process. To address these challenges, we propose Automated Graph Foundation Model with Adaptive Architecture Customization (AutoGFM), providing a theoretical analysis to demonstrate the limitations of existing GNAS. Specifically, we first propose a disentangled contrastive graph encoder to learn invariant and variant patterns. Then, we design an invariant-guided architecture customization strategy to customize architectures for data from diverse domains and tasks. Finally, we propose a curriculum architecture customization mechanism to mitigate the phenomenon of particular data dominating the search process. Extensive experiments demonstrate that AutoGFM outperforms baselines, achieving state-of-the-art performance.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chen25bp,
  title     = {{A}uto{GFM}: Automated Graph Foundation Model with Adaptive Architecture Customization},
  author    = {Chen, Haibo and Wang, Xin and Zhang, Zeyang and Li, Haoyang and Feng, Ling and Zhu, Wenwu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {9295--9315},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chen25bp/chen25bp.pdf},
  url       = {https://proceedings.mlr.press/v267/chen25bp.html},
  abstract  = {Graph foundation models (GFMs) aim to share graph knowledge across diverse domains and tasks to boost graph machine learning. However, existing GFMs rely on hand-designed and fixed graph neural network (GNN) architectures, failing to utilize optimal architectures w.r.t. specific domains and tasks, inevitably leading to suboptimal performance in diverse graph domains and tasks. In this paper, we explore graph neural architecture search (GNAS) for GFMs for the first time, which suffers from the problem of architecture inconsistency, i.e., the optimal architectures for different tasks and domains vary. We tackle this problem by discovering an invariant graph-architecture relationship across domains and tasks, which imposes three challenges: i) how to capture invariant and variant patterns; ii) how to customize architectures to adapt to diverse domains and tasks; iii) how to mitigate the data domination phenomenon during the architecture search process. To address these challenges, we propose Automated Graph Foundation Model with Adaptive Architecture Customization (AutoGFM), providing a theoretical analysis to demonstrate the limitations of existing GNAS. Specifically, we first propose a disentangled contrastive graph encoder to learn invariant and variant patterns. Then, we design an invariant-guided architecture customization strategy to customize architectures for data from diverse domains and tasks. Finally, we propose a curriculum architecture customization mechanism to mitigate the phenomenon of particular data dominating the search process. Extensive experiments demonstrate that AutoGFM outperforms baselines, achieving state-of-the-art performance.}
}
Endnote
%0 Conference Paper %T AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization %A Haibo Chen %A Xin Wang %A Zeyang Zhang %A Haoyang Li %A Ling Feng %A Wenwu Zhu %B Proceedings of the 42nd International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2025 %E Aarti Singh %E Maryam Fazel %E Daniel Hsu %E Simon Lacoste-Julien %E Felix Berkenkamp %E Tegan Maharaj %E Kiri Wagstaff %E Jerry Zhu %F pmlr-v267-chen25bp %I PMLR %P 9295--9315 %U https://proceedings.mlr.press/v267/chen25bp.html %V 267 %X Graph foundation models (GFMs) aim to share graph knowledge across diverse domains and tasks to boost graph machine learning. However, existing GFMs rely on hand-designed and fixed graph neural network (GNN) architectures, failing to utilize optimal architectures w.r.t. specific domains and tasks, inevitably leading to suboptimal performance in diverse graph domains and tasks. In this paper, we explore graph neural architecture search (GNAS) for GFMs for the first time, which suffers from the problem of architecture inconsistency, i.e., the optimal architectures for different tasks and domains vary. We tackle this problem by discovering an invariant graph-architecture relationship across domains and tasks, which imposes three challenges: i) how to capture invariant and variant patterns; ii) how to customize architectures to adapt to diverse domains and tasks; iii) how to mitigate the data domination phenomenon during the architecture search process. To address these challenges, we propose Automated Graph Foundation Model with Adaptive Architecture Customization (AutoGFM), providing a theoretical analysis to demonstrate the limitations of existing GNAS. Specifically, we first propose a disentangled contrastive graph encoder to learn invariant and variant patterns. Then, we design an invariant-guided architecture customization strategy to customize architectures for data from diverse domains and tasks. Finally, we propose a curriculum architecture customization mechanism to mitigate the phenomenon of particular data dominating the search process. Extensive experiments demonstrate that AutoGFM outperforms baselines, achieving state-of-the-art performance.
APA
Chen, H., Wang, X., Zhang, Z., Li, H., Feng, L., & Zhu, W. (2025). AutoGFM: Automated Graph Foundation Model with Adaptive Architecture Customization. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:9295-9315. Available from https://proceedings.mlr.press/v267/chen25bp.html.