GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention

Jiaru Qian, Guancheng Wan, Wenke Huang, Guibin Zhang, Yuxin Wu, Bo Du, Mang Ye
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:50047-50065, 2025.

Abstract

Federated Graph Learning (FGL) proposes an effective approach to collaboratively training Graph Neural Networks (GNNs) while maintaining privacy. Nevertheless, communication efficiency becomes a critical bottleneck in environments with limited resources. In this context, one-shot FGL emerges as a promising solution by restricting communication to a single round. However, prevailing FGL methods face two key challenges in the one-shot setting: 1) They heavily rely on gradual personalized optimization over multiple rounds, undermining the capability of the global model to efficiently generalize across diverse graph structures. 2) They are prone to overfitting to local data distributions due to extreme structural bias, leading to catastrophic forgetting. To address these issues, we introduce GHOST, an innovative one-shot FGL framework. In GHOST, we establish a proxy model for each client to leverage diverse local knowledge and integrate it to train the global model. During training, we identify and consolidate parameters essential for capturing topological knowledge, thereby mitigating catastrophic forgetting. Extensive experiments on real-world tasks demonstrate the superiority and generalization capability of GHOST. The code is available at https://github.com/JiaruQian/GHOST.
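The abstract only sketches the mechanism at a high level. As a rough illustration of the two ideas it names — importance-weighted consolidation of topology-relevant parameters, and a single aggregation round — here is a minimal numpy sketch. This is not the authors' actual algorithm; the functions `importance_scores`, `consolidated_update`, and `one_shot_aggregate`, the squared-gradient importance measure, and the damping rule are all illustrative assumptions.

```python
import numpy as np

def importance_scores(grads):
    """Diagonal Fisher-style importance: mean squared gradient per parameter
    (an assumed stand-in for identifying topology-critical parameters)."""
    return np.mean(np.square(grads), axis=0)

def consolidated_update(theta, theta_local, importance, lam=1.0):
    """Move global weights toward a local update, but damp the change where
    importance is high, so consolidated knowledge is retained."""
    gate = 1.0 / (1.0 + lam * importance)  # high importance -> small change
    return theta + gate * (theta_local - theta)

def one_shot_aggregate(client_params, client_importance):
    """Single-round, importance-weighted averaging across clients."""
    weights = np.stack(client_importance) + 1e-8  # avoid division by zero
    params = np.stack(client_params)
    return np.sum(weights * params, axis=0) / np.sum(weights, axis=0)
```

In this toy form, a parameter whose importance is large barely moves during consolidation, which is one common way (e.g. in EWC-style continual learning) to mitigate the catastrophic forgetting the abstract describes.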

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-qian25a,
  title     = {{GHOST}: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention},
  author    = {Qian, Jiaru and Wan, Guancheng and Huang, Wenke and Zhang, Guibin and Wu, Yuxin and Du, Bo and Ye, Mang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {50047--50065},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/qian25a/qian25a.pdf},
  url       = {https://proceedings.mlr.press/v267/qian25a.html},
  abstract  = {Federated Graph Learning (FGL) proposes an effective approach to collaboratively training Graph Neural Networks (GNNs) while maintaining privacy. Nevertheless, communication efficiency becomes a critical bottleneck in environments with limited resources. In this context, one-shot FGL emerges as a promising solution by restricting communication to a single round. However, prevailing FGL methods face two key challenges in the one-shot setting: 1) They heavily rely on gradual personalized optimization over multiple rounds, undermining the capability of the global model to efficiently generalize across diverse graph structures. 2) They are prone to overfitting to local data distributions due to extreme structural bias, leading to catastrophic forgetting. To address these issues, we introduce GHOST, an innovative one-shot FGL framework. In GHOST, we establish a proxy model for each client to leverage diverse local knowledge and integrate it to train the global model. During training, we identify and consolidate parameters essential for capturing topological knowledge, thereby mitigating catastrophic forgetting. Extensive experiments on real-world tasks demonstrate the superiority and generalization capability of GHOST. The code is available at https://github.com/JiaruQian/GHOST.}
}
Endnote
%0 Conference Paper
%T GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention
%A Jiaru Qian
%A Guancheng Wan
%A Wenke Huang
%A Guibin Zhang
%A Yuxin Wu
%A Bo Du
%A Mang Ye
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-qian25a
%I PMLR
%P 50047--50065
%U https://proceedings.mlr.press/v267/qian25a.html
%V 267
%X Federated Graph Learning (FGL) proposes an effective approach to collaboratively training Graph Neural Networks (GNNs) while maintaining privacy. Nevertheless, communication efficiency becomes a critical bottleneck in environments with limited resources. In this context, one-shot FGL emerges as a promising solution by restricting communication to a single round. However, prevailing FGL methods face two key challenges in the one-shot setting: 1) They heavily rely on gradual personalized optimization over multiple rounds, undermining the capability of the global model to efficiently generalize across diverse graph structures. 2) They are prone to overfitting to local data distributions due to extreme structural bias, leading to catastrophic forgetting. To address these issues, we introduce GHOST, an innovative one-shot FGL framework. In GHOST, we establish a proxy model for each client to leverage diverse local knowledge and integrate it to train the global model. During training, we identify and consolidate parameters essential for capturing topological knowledge, thereby mitigating catastrophic forgetting. Extensive experiments on real-world tasks demonstrate the superiority and generalization capability of GHOST. The code is available at https://github.com/JiaruQian/GHOST.
APA
Qian, J., Wan, G., Huang, W., Zhang, G., Wu, Y., Du, B. & Ye, M. (2025). GHOST: Generalizable One-Shot Federated Graph Learning with Proxy-Based Topology Knowledge Retention. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:50047-50065. Available from https://proceedings.mlr.press/v267/qian25a.html.