Towards Causal Replay for Knowledge Rehearsal in Continual Learning

Nikhil Churamani, Jiaee Cheong, Sinan Kalkan, Hatice Gunes
Proceedings of The First AAAI Bridge Program on Continual Causality, PMLR 208:63-70, 2023.

Abstract

Given the challenges associated with the real-world deployment of Machine Learning (ML) models, especially towards efficiently integrating novel information on-the-go, both Continual Learning (CL) and Causality have been proposed and investigated individually as potent solutions. Despite their complementary nature, the bridge between them is still largely unexplored. In this work, we focus on causality to improve the learning and knowledge preservation capabilities of CL models. In particular, positing Causal Replay for knowledge rehearsal, we discuss how CL-based models can benefit from causal interventions towards improving their ability to replay past knowledge in order to mitigate forgetting.

Cite this Paper


BibTeX
@InProceedings{pmlr-v208-churamani23a,
  title     = {Towards Causal Replay for Knowledge Rehearsal in Continual Learning},
  author    = {Churamani, Nikhil and Cheong, Jiaee and Kalkan, Sinan and Gunes, Hatice},
  booktitle = {Proceedings of The First AAAI Bridge Program on Continual Causality},
  pages     = {63--70},
  year      = {2023},
  editor    = {Mundt, Martin and Cooper, Keiland W. and Dhami, Devendra Singh and Ribeiro, Ad\'{e}le and Smith, James Seale and Bellot, Alexis and Hayes, Tyler},
  volume    = {208},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--08 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v208/churamani23a/churamani23a.pdf},
  url       = {https://proceedings.mlr.press/v208/churamani23a.html},
  abstract  = {Given the challenges associated with the real-world deployment of Machine Learning (ML) models, especially towards efficiently integrating novel information on-the-go, both Continual Learning (CL) and Causality have been proposed and investigated individually as potent solutions. Despite their complementary nature, the bridge between them is still largely unexplored. In this work, we focus on causality to improve the learning and knowledge preservation capabilities of CL models. In particular, positing Causal Replay for knowledge rehearsal, we discuss how CL-based models can benefit from causal interventions towards improving their ability to replay past knowledge in order to mitigate forgetting.}
}
Endnote
%0 Conference Paper
%T Towards Causal Replay for Knowledge Rehearsal in Continual Learning
%A Nikhil Churamani
%A Jiaee Cheong
%A Sinan Kalkan
%A Hatice Gunes
%B Proceedings of The First AAAI Bridge Program on Continual Causality
%C Proceedings of Machine Learning Research
%D 2023
%E Martin Mundt
%E Keiland W. Cooper
%E Devendra Singh Dhami
%E Adéle Ribeiro
%E James Seale Smith
%E Alexis Bellot
%E Tyler Hayes
%F pmlr-v208-churamani23a
%I PMLR
%P 63--70
%U https://proceedings.mlr.press/v208/churamani23a.html
%V 208
%X Given the challenges associated with the real-world deployment of Machine Learning (ML) models, especially towards efficiently integrating novel information on-the-go, both Continual Learning (CL) and Causality have been proposed and investigated individually as potent solutions. Despite their complementary nature, the bridge between them is still largely unexplored. In this work, we focus on causality to improve the learning and knowledge preservation capabilities of CL models. In particular, positing Causal Replay for knowledge rehearsal, we discuss how CL-based models can benefit from causal interventions towards improving their ability to replay past knowledge in order to mitigate forgetting.
APA
Churamani, N., Cheong, J., Kalkan, S. & Gunes, H. (2023). Towards Causal Replay for Knowledge Rehearsal in Continual Learning. Proceedings of The First AAAI Bridge Program on Continual Causality, in Proceedings of Machine Learning Research 208:63-70. Available from https://proceedings.mlr.press/v208/churamani23a.html.