Divide and Orthogonalize: Efficient Continual Learning with Local Model Space Projection

Jin Shang, Simone Shao, Tian Tong, Fan Yang, Yetian Chen, Yang Jiao, Jia Liu, Yan Gao
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:3766-3786, 2025.

Abstract

Continual learning (CL) has gained increasing interest in recent years due to the need for models that can continuously learn new tasks while retaining knowledge from previous ones. However, existing CL methods often require either computationally expensive layer-wise gradient projections or large-scale storage of past task data, making them impractical for resource-constrained scenarios. To address these challenges, we propose a local model space projection (LMSP)-based continual learning framework that significantly reduces computational complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2)$ while preserving both forward and backward knowledge transfer with minimal performance trade-offs. We establish a theoretical analysis of the error and convergence properties of LMSP compared to conventional global approaches. Extensive experiments on multiple public datasets demonstrate that our method achieves competitive performance while offering substantial efficiency gains, making it a promising solution for scalable continual learning.
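
For readers less familiar with projection-based continual learning, the sketch below illustrates the general mechanism the abstract refers to: maintain an orthonormal basis for the subspace used by earlier tasks and project each new task's gradients onto its orthogonal complement, with a block-wise ("local") basis shown as one way the per-layer SVD cost can shrink. The function names and the block partition are assumptions made for illustration only; the paper's actual LMSP construction, complexity analysis, and knowledge-transfer mechanisms are given in the PDF and are not reproduced here.

import numpy as np


def update_basis(basis, activations, energy=0.99):
    """Grow an orthonormal basis with the new directions introduced by the
    current task's layer activations.

    `activations` is a (features x samples) matrix collected on the current
    task. Directions already spanned by `basis` are removed first, so the
    truncated SVD only has to explain what is genuinely new.
    """
    if basis is not None:
        activations = activations - basis @ (basis.T @ activations)
    u, s, _ = np.linalg.svd(activations, full_matrices=False)
    # Keep just enough singular directions to capture the requested energy.
    cum = np.cumsum(s ** 2) / max(np.sum(s ** 2), 1e-12)
    k = min(int(np.searchsorted(cum, energy)) + 1, u.shape[1])
    new_dirs = u[:, :k]
    return new_dirs if basis is None else np.hstack([basis, new_dirs])


def project_gradient(grad, basis):
    """Remove from an (out x in) layer gradient the component lying in the
    span of `basis` (in x k), so the update is orthogonal to directions
    important for earlier tasks."""
    if basis is None:
        return grad
    return grad - (grad @ basis) @ basis.T


def local_bases(activations, num_blocks, energy=0.99):
    """Illustrative "local" variant: split the feature dimension into blocks
    and keep a small basis per block, so each SVD runs on a matrix with
    roughly n/num_blocks rows instead of n (the source of the cubic cost in
    the global approach). This block partition is an assumption for
    illustration, not the paper's actual model-space decomposition."""
    blocks = np.array_split(np.arange(activations.shape[0]), num_blocks)
    return [update_basis(None, activations[idx, :], energy) for idx in blocks]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts_task1 = rng.standard_normal((64, 200))   # layer inputs from task 1
    basis = update_basis(None, acts_task1)
    grad = rng.standard_normal((128, 64))         # gradient of a 128x64 layer
    grad_proj = project_gradient(grad, basis)
    # The projected gradient is (numerically) orthogonal to the stored basis.
    print(np.abs(grad_proj @ basis).max())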

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-shang25a,
  title     = {Divide and Orthogonalize: Efficient Continual Learning with Local Model Space Projection},
  author    = {Shang, Jin and Shao, Simone and Tong, Tian and Yang, Fan and Chen, Yetian and Jiao, Yang and Liu, Jia and Gao, Yan},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {3766--3786},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/shang25a/shang25a.pdf},
  url       = {https://proceedings.mlr.press/v286/shang25a.html},
  abstract  = {Continual learning (CL) has gained increasing interest in recent years due to the need for models that can continuously learn new tasks while retaining knowledge from previous ones. However, existing CL methods often require either computationally expensive layer-wise gradient projections or large-scale storage of past task data, making them impractical for resource-constrained scenarios. To address these challenges, we propose a local model space projection (LMSP)-based continual learning framework that significantly reduces computational complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2)$ while preserving both forward and backward knowledge transfer with minimal performance trade-offs. We establish a theoretical analysis of the error and convergence properties of LMSP compared to conventional global approaches. Extensive experiments on multiple public datasets demonstrate that our method achieves competitive performance while offering substantial efficiency gains, making it a promising solution for scalable continual learning.}
}
Endnote
%0 Conference Paper
%T Divide and Orthogonalize: Efficient Continual Learning with Local Model Space Projection
%A Jin Shang
%A Simone Shao
%A Tian Tong
%A Fan Yang
%A Yetian Chen
%A Yang Jiao
%A Jia Liu
%A Yan Gao
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-shang25a
%I PMLR
%P 3766--3786
%U https://proceedings.mlr.press/v286/shang25a.html
%V 286
%X Continual learning (CL) has gained increasing interest in recent years due to the need for models that can continuously learn new tasks while retaining knowledge from previous ones. However, existing CL methods often require either computationally expensive layer-wise gradient projections or large-scale storage of past task data, making them impractical for resource-constrained scenarios. To address these challenges, we propose a local model space projection (LMSP)-based continual learning framework that significantly reduces computational complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n^2)$ while preserving both forward and backward knowledge transfer with minimal performance trade-offs. We establish a theoretical analysis of the error and convergence properties of LMSP compared to conventional global approaches. Extensive experiments on multiple public datasets demonstrate that our method achieves competitive performance while offering substantial efficiency gains, making it a promising solution for scalable continual learning.
APA
Shang, J., Shao, S., Tong, T., Yang, F., Chen, Y., Jiao, Y., Liu, J. & Gao, Y. (2025). Divide and Orthogonalize: Efficient Continual Learning with Local Model Space Projection. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:3766-3786. Available from https://proceedings.mlr.press/v286/shang25a.html.