Issues for Continual Learning in the Presence of Dataset Bias

Donggyu Lee, Sangwon Jung, Taesup Moon
Proceedings of The First AAAI Bridge Program on Continual Causality, PMLR 208:92-99, 2023.

Abstract

While most continual learning algorithms have focused on tackling the stability-plasticity dilemma, they have overlooked the effects of knowledge transfer when the dataset is biased, namely, when unintended spurious correlations, rather than the true causal structures of the tasks, are learned from the biased dataset. In that case, how would such biases affect learning future tasks or the knowledge already learned from past tasks? In this work, we design systematic experiments with a synthetic biased dataset and answer this question through our empirical findings. We first show that standard continual learning methods that are unaware of dataset bias can transfer biases from one task to another, both forward and backward. In addition, we find that naively applying existing debiasing methods after each continual learning step can lead to significant forgetting of past tasks and reduced overall continual learning performance. These findings highlight the need for a causality-aware design of continual learning algorithms that prevents both bias transfer and catastrophic forgetting.

Cite this Paper

BibTeX
@InProceedings{pmlr-v208-lee23a,
  title     = {Issues for Continual Learning in the Presence of Dataset Bias},
  author    = {Lee, Donggyu and Jung, Sangwon and Moon, Taesup},
  booktitle = {Proceedings of The First AAAI Bridge Program on Continual Causality},
  pages     = {92--99},
  year      = {2023},
  editor    = {Mundt, Martin and Cooper, Keiland W. and Dhami, Devendra Singh and Ribeiro, Adéle and Smith, James Seale and Bellot, Alexis and Hayes, Tyler},
  volume    = {208},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v208/lee23a/lee23a.pdf},
  url       = {https://proceedings.mlr.press/v208/lee23a.html},
  abstract  = {While most continual learning algorithms have focused on tackling the stability-plasticity dilemma, they have overlooked the effects of the knowledge transfer when the dataset is biased — namely, when some unintended spurious correlations, not the true causal structures, of the tasks are learned from the biased dataset. In that case, how would they affect learning future tasks or the knowledge already learned from the past tasks? In this work, we design systematic experiments with a synthetic biased dataset and try to answer the above question from our empirical findings. Namely, we first show that standard continual learning methods that are unaware of dataset bias can transfer biases from one task to another, both forward and backward. In addition, we find that naively using existing debiasing methods after each continual learning step can lead to significant forgetting of past tasks and reduced overall continual learning performance. These findings highlight the need for a causality-aware design of continual learning algorithms to prevent both bias transfers and catastrophic forgetting.}
}
Endnote
%0 Conference Paper
%T Issues for Continual Learning in the Presence of Dataset Bias
%A Donggyu Lee
%A Sangwon Jung
%A Taesup Moon
%B Proceedings of The First AAAI Bridge Program on Continual Causality
%C Proceedings of Machine Learning Research
%D 2023
%E Martin Mundt
%E Keiland W. Cooper
%E Devendra Singh Dhami
%E Adéle Ribeiro
%E James Seale Smith
%E Alexis Bellot
%E Tyler Hayes
%F pmlr-v208-lee23a
%I PMLR
%P 92--99
%U https://proceedings.mlr.press/v208/lee23a.html
%V 208
%X While most continual learning algorithms have focused on tackling the stability-plasticity dilemma, they have overlooked the effects of the knowledge transfer when the dataset is biased — namely, when some unintended spurious correlations, not the true causal structures, of the tasks are learned from the biased dataset. In that case, how would they affect learning future tasks or the knowledge already learned from the past tasks? In this work, we design systematic experiments with a synthetic biased dataset and try to answer the above question from our empirical findings. Namely, we first show that standard continual learning methods that are unaware of dataset bias can transfer biases from one task to another, both forward and backward. In addition, we find that naively using existing debiasing methods after each continual learning step can lead to significant forgetting of past tasks and reduced overall continual learning performance. These findings highlight the need for a causality-aware design of continual learning algorithms to prevent both bias transfers and catastrophic forgetting.
APA
Lee, D., Jung, S., & Moon, T. (2023). Issues for Continual Learning in the Presence of Dataset Bias. Proceedings of The First AAAI Bridge Program on Continual Causality, in Proceedings of Machine Learning Research 208:92-99. Available from https://proceedings.mlr.press/v208/lee23a.html.