On the Convergence of Continual Federated Learning Using Incrementally Aggregated Gradients

Satish Kumar Keshri, Nazreen Shah, Ranjitha Prasad
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:5068-5076, 2025.

Abstract

The holy grail of machine learning is to enable Continual Federated Learning (CFL) to enhance the efficiency, privacy, and scalability of AI systems while learning from streaming data. The primary challenge of a CFL system is to overcome global catastrophic forgetting, wherein the accuracy of the global model trained on new tasks declines on the old tasks. In this work, we propose \emph{Continual Federated Learning with Aggregated Gradients} (C-FLAG), a novel replay-memory-based federated strategy consisting of edge-based gradient updates on memory and aggregated gradients on the current data. We provide a convergence analysis of the C-FLAG approach, which addresses forgetting and bias while converging at a rate of $O(1/\sqrt{T})$ over $T$ communication rounds. We formulate an optimization sub-problem that minimizes catastrophic forgetting, translating CFL into an iterative algorithm with adaptive learning rates that ensure seamless learning across tasks. We empirically show that C-FLAG outperforms several state-of-the-art baselines in both task- and class-incremental settings with respect to metrics such as accuracy and forgetting.
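
As a reading aid, here is a minimal toy sketch of the round structure the abstract describes: each client takes edge-side gradient steps on a replay memory and returns a gradient on its current-task data, which the server aggregates with a decaying step size, in the spirit of the $O(1/\sqrt{T})$ rate. The least-squares clients, the memory/current data split, and all step sizes below are illustrative assumptions, not the paper's exact C-FLAG algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_client(A, b):
        """Toy least-squares client: loss(w) = mean((A w - b)^2)."""
        def grad(w, X, y):
            return 2.0 * X.T @ (X @ w - y) / len(y)
        half = len(b) // 2
        memory = (A[:half], b[:half])   # stand-in for the client's replay memory
        batch = (A[half:], b[half:])    # stand-in for the current task's data
        return grad, memory, batch

    def client_update(w, grad, memory, batch, eta_mem=0.05, local_steps=5):
        # Edge-based gradient steps on the replay memory (to counter forgetting),
        # then one gradient on the current data, returned for server aggregation.
        for _ in range(local_steps):
            w = w - eta_mem * grad(w, *memory)
        return grad(w, *batch)

    clients = [make_client(rng.normal(size=(20, 3)), rng.normal(size=20))
               for _ in range(4)]
    w = np.zeros(3)
    for t in range(100):                  # T communication rounds
        eta_cur = 0.5 / np.sqrt(t + 1)    # decaying step, echoing the O(1/sqrt(T)) analysis
        g_avg = np.mean([client_update(w, *c) for c in clients], axis=0)
        w = w - eta_cur * g_avg           # server applies the aggregated gradient
    print("final w:", w)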

Cite this Paper

BibTeX
@InProceedings{pmlr-v258-keshri25a,
  title     = {On the Convergence of Continual Federated Learning Using Incrementally Aggregated Gradients},
  author    = {Keshri, Satish Kumar and Shah, Nazreen and Prasad, Ranjitha},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {5068--5076},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/keshri25a/keshri25a.pdf},
  url       = {https://proceedings.mlr.press/v258/keshri25a.html}
}
Endnote
%0 Conference Paper
%T On the Convergence of Continual Federated Learning Using Incrementally Aggregated Gradients
%A Satish Kumar Keshri
%A Nazreen Shah
%A Ranjitha Prasad
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-keshri25a
%I PMLR
%P 5068--5076
%U https://proceedings.mlr.press/v258/keshri25a.html
%V 258
APA
Keshri, S. K., Shah, N. & Prasad, R. (2025). On the Convergence of Continual Federated Learning Using Incrementally Aggregated Gradients. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:5068-5076. Available from https://proceedings.mlr.press/v258/keshri25a.html.