Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning

Nader Asadi, Mohammadreza Davari, Sudhir Mudur, Rahaf Aljundi, Eugene Belilovsky
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:1093-1106, 2023.

Abstract

In continual learning (CL), balancing effective adaptation against catastrophic forgetting is a central challenge. Many of the recent best-performing methods rely on some form of prior-task data, e.g. a replay buffer, to tackle the catastrophic forgetting problem. However, access to previous task data can be restrictive in many real-world scenarios, for example when task data is sensitive or proprietary. To remove the need for previous tasks' data, in this work we start from strong representation learning methods that have been shown to be less prone to forgetting. We propose a holistic approach that jointly learns the representation and the class prototypes while maintaining the relevance of old class prototypes and their embedded similarities. Specifically, samples are mapped to an embedding space in which the representations are learned with a supervised contrastive loss. Class prototypes are evolved continually in the same latent space, enabling learning and prediction at any point. To adapt the prototypes continually without keeping any prior task data, we propose a novel distillation loss that constrains class prototypes to maintain their relative similarities with respect to new task data. This method yields state-of-the-art performance in the task-incremental setting, outperforming methods relying on large amounts of stored data, and provides strong performance in the class-incremental setting without using any stored data points.
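
The abstract describes the core mechanism: embeddings learned with a supervised contrastive loss, class prototypes maintained in the same latent space (so prediction is possible at any point), and a distillation term that keeps old prototypes' relative similarities to new-task samples stable. The PyTorch sketch below is a minimal illustration of the last two ideas only, under stated assumptions: the function names, the temperature tau, and the KL-based matching of similarity distributions are choices made here for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def relation_distillation_loss(z_new, old_prototypes, frozen_old_prototypes, tau=0.1):
        """Encourage old-class prototypes to keep the same relative similarities
        to new-task embeddings as they had before the current task started.

        z_new:                 (B, D) embeddings of current-task samples
        old_prototypes:        (K, D) trainable prototypes of previously seen classes
        frozen_old_prototypes: (K, D) snapshot of those prototypes taken before this task
        """
        z = F.normalize(z_new, dim=1)
        p_cur = F.normalize(old_prototypes, dim=1)
        p_ref = F.normalize(frozen_old_prototypes, dim=1)

        # Each new sample induces a similarity distribution over the old prototypes.
        sim_cur = z @ p_cur.t() / tau              # uses the prototypes being updated
        sim_ref = (z @ p_ref.t() / tau).detach()   # fixed reference from the snapshot

        # Match the two distributions so prototype-sample relations are preserved.
        return F.kl_div(F.log_softmax(sim_cur, dim=1),
                        F.softmax(sim_ref, dim=1),
                        reduction="batchmean")

    def predict(z, prototypes, tau=0.1):
        """Nearest-prototype prediction in the shared embedding space."""
        sims = F.normalize(z, dim=1) @ F.normalize(prototypes, dim=1).t() / tau
        return sims.argmax(dim=1)

    # Example usage with random tensors standing in for encoder outputs:
    z = torch.randn(32, 128)                        # embeddings of new-task samples
    p = torch.randn(10, 128, requires_grad=True)    # trainable old-class prototypes
    p0 = p.detach().clone()                         # snapshot from before the current task
    loss = relation_distillation_loss(z, p, p0)

In a full training loop this term would be added to the supervised contrastive loss and a prototype-learning objective; only the relation-distillation piece is sketched here.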

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-asadi23a,
  title     = {Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning},
  author    = {Asadi, Nader and Davari, Mohammadreza and Mudur, Sudhir and Aljundi, Rahaf and Belilovsky, Eugene},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {1093--1106},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/asadi23a/asadi23a.pdf},
  url       = {https://proceedings.mlr.press/v202/asadi23a.html}
}
Endnote
%0 Conference Paper
%T Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning
%A Nader Asadi
%A Mohammadreza Davari
%A Sudhir Mudur
%A Rahaf Aljundi
%A Eugene Belilovsky
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-asadi23a
%I PMLR
%P 1093--1106
%U https://proceedings.mlr.press/v202/asadi23a.html
%V 202
APA
Asadi, N., Davari, M., Mudur, S., Aljundi, R. & Belilovsky, E. (2023). Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:1093-1106. Available from https://proceedings.mlr.press/v202/asadi23a.html.
