Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method

Kishaan Jeeveswaran, Elahe Arani, Bahram Zonooz
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:21486-21501, 2024.

Abstract

Domain incremental learning (DIL) poses a significant challenge in real-world scenarios, as models must be trained sequentially on diverse domains over time while avoiding catastrophic forgetting. Mitigating representation drift, the change in learned representations as the model adapts to new tasks, can help alleviate catastrophic forgetting. In this study, we propose a novel DIL method named DARE, featuring a three-stage training process: Divergence, Adaptation, and REfinement. This process gradually adapts the representations associated with new tasks into the feature space spanned by samples from previous tasks, while integrating task-specific decision boundaries. We also introduce a novel buffer-sampling strategy and show that our method, combined with this strategy, reduces representation drift within the feature encoder, effectively alleviating catastrophic forgetting across multiple DIL benchmarks. Furthermore, our approach prevents sudden representation drift at task boundaries, yielding a well-calibrated DIL model that maintains performance on previous tasks.
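
The abstract describes DARE only at a high level, so the sketch below is a minimal, PyTorch-style illustration of how a three-stage, replay-based DIL training loop of this general shape might be organized. Every name and design choice here (ReservoirBuffer, train_task, the per-stage loss schedule, the plain replay cross-entropy) is an assumption made for illustration; it stands in for, and does not reproduce, the paper's actual Divergence/Adaptation/REfinement objectives or its proposed buffer-sampling strategy.

    # Hypothetical sketch: three-stage domain-incremental training with replay.
    # Not the authors' implementation; losses and stages are illustrative only.
    import random
    import torch
    import torch.nn.functional as F

    class ReservoirBuffer:
        """Plain reservoir-sampling replay buffer (a standard baseline;
        the paper's own buffer-sampling strategy is not reproduced here)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []   # list of (x, y) tensor pairs
            self.seen = 0    # number of stream samples observed so far

        def add(self, x, y):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append((x, y))
            else:
                i = random.randrange(self.seen)
                if i < self.capacity:
                    self.data[i] = (x, y)

        def sample(self, k):
            batch = random.sample(self.data, min(k, len(self.data)))
            xs, ys = zip(*batch)
            return torch.stack(xs), torch.stack(ys)

    def train_task(encoder, classifier, loader, buffer, optimizer,
                   epochs_per_stage=(1, 1, 1), replay_k=32):
        """Train on one domain in three stages. Stage 1 fits the new
        domain; stages 2-3 add a replay term so old-domain features stay
        usable, limiting abrupt drift at the task boundary."""
        for stage, n_epochs in enumerate(epochs_per_stage, start=1):
            for _ in range(n_epochs):
                for x, y in loader:
                    loss = F.cross_entropy(classifier(encoder(x)), y)
                    if stage >= 2 and buffer.data:
                        bx, by = buffer.sample(replay_k)
                        # Replay keeps buffered samples classifiable,
                        # pulling new-task features toward the old space.
                        loss = loss + F.cross_entropy(
                            classifier(encoder(bx)), by)
                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()
                    for xi, yi in zip(x, y):
                        buffer.add(xi.detach(), yi.detach())

A driver would call train_task once per incoming domain, reusing the same encoder, classifier, optimizer, and buffer across tasks, so that the replay term in the later stages can counteract the drift introduced by each new domain.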

Cite this Paper

BibTeX
@InProceedings{pmlr-v235-jeeveswaran24a,
  title     = {Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method},
  author    = {Jeeveswaran, Kishaan and Arani, Elahe and Zonooz, Bahram},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {21486--21501},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/jeeveswaran24a/jeeveswaran24a.pdf},
  url       = {https://proceedings.mlr.press/v235/jeeveswaran24a.html},
  abstract  = {Domain incremental learning (DIL) poses a significant challenge in real-world scenarios, as models need to be sequentially trained on diverse domains over time, all the while avoiding catastrophic forgetting. Mitigating representation drift, which refers to the phenomenon of learned representations undergoing changes as the model adapts to new tasks, can help alleviate catastrophic forgetting. In this study, we propose a novel DIL method named DARE, featuring a three-stage training process: Divergence, Adaptation, and REfinement. This process gradually adapts the representations associated with new tasks into the feature space spanned by samples from previous tasks, simultaneously integrating task-specific decision boundaries. Additionally, we introduce a novel strategy for buffer sampling and demonstrate the effectiveness of our proposed method, combined with this sampling strategy, in reducing representation drift within the feature encoder. This contribution effectively alleviates catastrophic forgetting across multiple DIL benchmarks. Furthermore, our approach prevents sudden representation drift at task boundaries, resulting in a well-calibrated DIL model that maintains the performance on previous tasks.}
}
Endnote
%0 Conference Paper
%T Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method
%A Kishaan Jeeveswaran
%A Elahe Arani
%A Bahram Zonooz
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-jeeveswaran24a
%I PMLR
%P 21486--21501
%U https://proceedings.mlr.press/v235/jeeveswaran24a.html
%V 235
%X Domain incremental learning (DIL) poses a significant challenge in real-world scenarios, as models need to be sequentially trained on diverse domains over time, all the while avoiding catastrophic forgetting. Mitigating representation drift, which refers to the phenomenon of learned representations undergoing changes as the model adapts to new tasks, can help alleviate catastrophic forgetting. In this study, we propose a novel DIL method named DARE, featuring a three-stage training process: Divergence, Adaptation, and REfinement. This process gradually adapts the representations associated with new tasks into the feature space spanned by samples from previous tasks, simultaneously integrating task-specific decision boundaries. Additionally, we introduce a novel strategy for buffer sampling and demonstrate the effectiveness of our proposed method, combined with this sampling strategy, in reducing representation drift within the feature encoder. This contribution effectively alleviates catastrophic forgetting across multiple DIL benchmarks. Furthermore, our approach prevents sudden representation drift at task boundaries, resulting in a well-calibrated DIL model that maintains the performance on previous tasks.
APA
Jeeveswaran, K., Arani, E. & Zonooz, B. (2024). Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:21486-21501. Available from https://proceedings.mlr.press/v235/jeeveswaran24a.html.
