GCAL: Adapting Graph Models to Evolving Domain Shifts

Ziyue Qiao, Qianyi Cai, Hao Dong, Jiawei Gu, Pengyang Wang, Meng Xiao, Xiao Luo, Hui Xiong
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:50113-50129, 2025.

Abstract

This paper addresses the challenge of graph domain adaptation on evolving, multiple out-of-distribution (OOD) graphs. Conventional graph domain adaptation methods are confined to single-step adaptation, making them ineffective in handling continuous domain shifts and prone to catastrophic forgetting. This paper introduces the Graph Continual Adaptive Learning (GCAL) method, designed to enhance model sustainability and adaptability across various graph domains. GCAL employs a bilevel optimization strategy. The "adapt" phase uses an information maximization approach to fine-tune the model with new graph domains while re-adapting past memories to mitigate forgetting. Concurrently, the "generate memory" phase, guided by a theoretical lower bound derived from information bottleneck theory, involves a variational memory graph generation module to condense original graphs into memories. Extensive experimental evaluations demonstrate that GCAL substantially outperforms existing methods in terms of adaptability and knowledge retention.

Cite this Paper
BibTeX
@InProceedings{pmlr-v267-qiao25a,
  title = {{GCAL}: Adapting Graph Models to Evolving Domain Shifts},
  author = {Qiao, Ziyue and Cai, Qianyi and Dong, Hao and Gu, Jiawei and Wang, Pengyang and Xiao, Meng and Luo, Xiao and Xiong, Hui},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {50113--50129},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/qiao25a/qiao25a.pdf},
  url = {https://proceedings.mlr.press/v267/qiao25a.html},
  abstract = {This paper addresses the challenge of graph domain adaptation on evolving, multiple out-of-distribution (OOD) graphs. Conventional graph domain adaptation methods are confined to single-step adaptation, making them ineffective in handling continuous domain shifts and prone to catastrophic forgetting. This paper introduces the Graph Continual Adaptive Learning (GCAL) method, designed to enhance model sustainability and adaptability across various graph domains. GCAL employs a bilevel optimization strategy. The "adapt" phase uses an information maximization approach to fine-tune the model with new graph domains while re-adapting past memories to mitigate forgetting. Concurrently, the "generate memory" phase, guided by a theoretical lower bound derived from information bottleneck theory, involves a variational memory graph generation module to condense original graphs into memories. Extensive experimental evaluations demonstrate that GCAL substantially outperforms existing methods in terms of adaptability and knowledge retention.}
}
Endnote
%0 Conference Paper
%T GCAL: Adapting Graph Models to Evolving Domain Shifts
%A Ziyue Qiao
%A Qianyi Cai
%A Hao Dong
%A Jiawei Gu
%A Pengyang Wang
%A Meng Xiao
%A Xiao Luo
%A Hui Xiong
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-qiao25a
%I PMLR
%P 50113--50129
%U https://proceedings.mlr.press/v267/qiao25a.html
%V 267
%X This paper addresses the challenge of graph domain adaptation on evolving, multiple out-of-distribution (OOD) graphs. Conventional graph domain adaptation methods are confined to single-step adaptation, making them ineffective in handling continuous domain shifts and prone to catastrophic forgetting. This paper introduces the Graph Continual Adaptive Learning (GCAL) method, designed to enhance model sustainability and adaptability across various graph domains. GCAL employs a bilevel optimization strategy. The "adapt" phase uses an information maximization approach to fine-tune the model with new graph domains while re-adapting past memories to mitigate forgetting. Concurrently, the "generate memory" phase, guided by a theoretical lower bound derived from information bottleneck theory, involves a variational memory graph generation module to condense original graphs into memories. Extensive experimental evaluations demonstrate that GCAL substantially outperforms existing methods in terms of adaptability and knowledge retention.
APA
Qiao, Z., Cai, Q., Dong, H., Gu, J., Wang, P., Xiao, M., Luo, X. & Xiong, H. (2025). GCAL: Adapting Graph Models to Evolving Domain Shifts. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:50113-50129. Available from https://proceedings.mlr.press/v267/qiao25a.html.